
# The reduction of the solution to a problem into the same form of solutions to sub-problems is normally referred to as


## Decrease and Conquer


Difficulty Level : Easy

Last Updated : 16 Jan, 2018

The divide-and-conquer approach, already discussed, includes the following steps:

Divide the problem into a number of subproblems that are smaller instances of the same problem.

Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

Combine the solutions to the subproblems into the solution for the original problem.

The decrease-and-conquer approach works similarly; it also includes the following steps:

Decrease or reduce the problem instance to a smaller instance of the same problem.

Conquer the problem by solving the smaller instance.

Extend the solution of the smaller instance to obtain the solution to the original problem.

The basic idea of the decrease-and-conquer technique is to exploit the relationship between a solution to a given instance of a problem and a solution to its smaller instance. This approach is also known as the incremental or inductive approach.
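As a minimal illustration of this relationship (a hypothetical example, not from the original article), computing a^n can be reduced to the smaller instance a^(n-1), whose solution is then extended by one multiplication:

```python
def power(a, n):
    """Compute a**n by decrease-and-conquer: reduce the instance size
    by one, solve the smaller instance, then extend its solution."""
    if n == 0:
        return 1  # smallest instance, solved directly
    # solve the smaller instance a**(n-1), then extend by one factor of a
    return power(a, n - 1) * a
```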

“Divide-and-Conquer” vs “Decrease-and-Conquer”:

As per Wikipedia, some authors consider that the name “divide and conquer” should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class. According to this definition, Merge Sort and Quick Sort come under divide and conquer (because there are two sub-problems) and Binary Search comes under decrease and conquer (because there is one sub-problem).

Implementations of Decrease and Conquer :

This approach can be implemented either top-down or bottom-up.

Top-down approach : It always leads to a recursive implementation of the problem.

Bottom-up approach : It is usually implemented iteratively, starting with a solution to the smallest instance of the problem.

Variations of Decrease and Conquer :

There are three major variations of decrease-and-conquer:

Decrease by a constant

Decrease by a constant factor

Variable size decrease

Decrease by a Constant : In this variation, the size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one, although other constant-size reductions do happen occasionally. Below are example problems :

Insertion sort

Graph search algorithms: DFS, BFS

Topological sorting

Algorithms for generating permutations, subsets
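Insertion sort, the first example above, makes the decrease-by-one structure explicit: sorting the first i elements reduces to sorting the first i-1 elements and then extending that solution. A sketch for illustration:

```python
def insertion_sort(arr):
    """Decrease-by-one: assume arr[:i] is already sorted (the smaller
    instance), then extend the solution by inserting arr[i] into place."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # shift larger elements of the sorted prefix one slot right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```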

Decrease by a Constant factor: This technique suggests reducing a problem instance by the same constant factor on each iteration of the algorithm. In most applications, this constant factor is equal to two. A reduction by a factor other than two is especially rare.

Decrease-by-a-constant-factor algorithms are very efficient, especially when the factor is greater than 2, as in the fake-coin problem. Below are example problems :

Binary search

Fake-coin problem

Russian peasant multiplication
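Binary search is the canonical decrease-by-half example: each comparison discards half of the remaining sorted list. A sketch for illustration:

```python
def binary_search(arr, target):
    """Decrease-by-a-constant-factor: each comparison halves the
    portion of the sorted list that can contain the target."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1  # not found
```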

Variable-Size-Decrease : In this variation, the size-reduction pattern varies from one iteration of an algorithm to another.

For example, in the problem of finding the gcd of two numbers, though the value of the second argument is always smaller on the right-hand side than on the left-hand side, it decreases neither by a constant nor by a constant factor. Below are example problems :

Computing the median and the selection problem

Interpolation Search

Euclid’s algorithm
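Euclid's algorithm illustrates variable-size decrease: replacing (a, b) with (b, a mod b) shrinks the second argument on every step, but by an amount that depends on the inputs. A sketch for illustration:

```python
def gcd(a, b):
    """Variable-size decrease: gcd(a, b) = gcd(b, a % b).  The second
    argument shrinks each step, but neither by a fixed constant nor by
    a fixed factor -- the reduction depends on the values themselves."""
    while b != 0:
        a, b = b, a % b
    return a
```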

There may be cases where a problem can be solved by both the decrease-by-constant and decrease-by-factor variations, and the implementations can be either recursive or iterative. Iterative implementations may require more coding effort, but they avoid the overhead that accompanies recursion.

Reference :

Anany Levitin

Source : www.geeksforgeeks.org

## Divide-and-conquer algorithm

In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g., the Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT).

Designing efficient divide-and-conquer algorithms can be difficult. As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations.


## Divide and conquer

[Figure: Divide-and-conquer approach to sort the list (38, 27, 43, 3, 9, 82, 10) in increasing order — splitting into sublists; a one-element list is trivially sorted; composing sorted sublists.]

The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list (see the picture). This approach is known as the merge sort algorithm.
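The merge sort just described can be sketched as follows (an illustrative sketch, not code from the article):

```python
def merge_sort(lst):
    """Divide-and-conquer sort: split the list, sort each half
    recursively, then merge (interleave) the two sorted halves."""
    if len(lst) <= 1:
        return lst  # a one-element list is trivially sorted
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    # combine step: repeatedly take the smaller front element
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```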

The name "divide and conquer" is sometimes applied to algorithms that reduce each problem to only one sub-problem, such as the binary search algorithm for finding a record in a sorted list (or its analog in numerical computing, the bisection algorithm for root finding). These algorithms can be implemented more efficiently than general divide-and-conquer algorithms; in particular, if they use tail recursion, they can be converted into simple loops. Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class.

An important application of divide and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the geometric series); this is known as prune and search.

## Early historical examples

Early examples of these algorithms are primarily decrease and conquer – the original problem is successively broken down into subproblems, and indeed can be solved iteratively.

Binary search, a decrease-and-conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by John Mauchly, the idea of using a sorted list of items to facilitate searching dates back at least as far as Babylonia in 200 BC. Another ancient decrease-and-conquer algorithm is the Euclidean algorithm to compute the greatest common divisor of two numbers by reducing the numbers to smaller and smaller equivalent subproblems, which dates to several centuries BC.

An early example of a divide-and-conquer algorithm with multiple subproblems is Gauss's 1805 description of what is now called the Cooley–Tukey fast Fourier transform (FFT) algorithm, although he did not analyze its operation count quantitatively, and FFTs did not become widespread until they were rediscovered over a century later.

An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the merge sort algorithm, invented by John von Neumann in 1945.

Another notable example is the algorithm invented by Anatolii A. Karatsuba in 1960 that could multiply two n-digit numbers in O(n^(log₂ 3)) operations (in Big O notation). This algorithm disproved Andrey Kolmogorov's 1956 conjecture that Ω(n²) operations would be required for that task.
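Karatsuba's scheme can be sketched as follows; this is an illustrative implementation for non-negative integers, not code from the article. The key trick is replacing four recursive half-size products with three:

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers: split each
    number in half and compute three recursive products instead of
    four, giving O(n**log2(3)) instead of O(n**2) digit operations."""
    if x < 10 or y < 10:
        return x * y  # single-digit base case, solved directly
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    a = karatsuba(high_x, high_y)  # product of the high halves
    b = karatsuba(low_x, low_y)    # product of the low halves
    # one extra product recovers the cross terms high_x*low_y + low_x*high_y
    c = karatsuba(high_x + low_x, high_y + low_y) - a - b
    return a * 10 ** (2 * m) + c * 10 ** m + b
```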

As another example of a divide-and-conquer algorithm that did not originally involve computers, Donald Knuth gives the method a post office typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered. This is related to a radix sort, described for punch-card sorting machines as early as 1929.

Source : en.wikipedia.org

## Solving Problems With Dynamic Programming


John Wittenauer

May 9, 2016 · 10 min read

This content originally appeared on Curious Insight

Dynamic programming is a really useful general technique for solving problems that involves breaking down problems into smaller overlapping sub-problems, storing the results computed from the sub-problems and reusing those results on larger chunks of the problem. Dynamic programming solutions are pretty much always more efficient than naive brute-force solutions. It’s particularly effective on problems that contain optimal substructure.

Dynamic programming is related to a number of other fundamental concepts in computer science in interesting ways. Recursion, for example, is similar to (but not identical to) dynamic programming. The key difference is that in a naive recursive solution, answers to sub-problems may be computed many times. A recursive solution that caches answers to sub-problems which were already computed is called memoization, which is basically the inverse of dynamic programming. Another variation is when the sub-problems don’t actually overlap at all, in which case the technique is known as divide and conquer. Finally, dynamic programming is tied to the concept of mathematical induction and can be thought of as a specific application of inductive reasoning in practice.
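The memoization idea mentioned above can be sketched for Fibonacci numbers, the same example used later in the post (an illustrative sketch, not code from the original article). The recursive structure stays exactly the same; a cache simply ensures each sub-problem is computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Recursive Fibonacci with memoization: answers to sub-problems
    are cached, so each value is computed once and the running time
    is linear in n instead of exponential."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```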

While the core ideas behind dynamic programming are actually pretty simple, it turns out that it’s fairly challenging to use on non-trivial problems because it’s often not obvious how to frame a difficult problem in terms of overlapping sub-problems. This is where experience and practice come in handy, which is the idea for this blog post. We’ll build both naive and “intelligent” solutions to several well-known problems and see how the problems are decomposed to use dynamic programming solutions. The code is written in basic python with no special dependencies.

## Fibonacci Numbers

First we’ll look at the problem of computing numbers in the Fibonacci sequence. The problem definition is very simple — each number in the sequence is the sum of the two previous numbers in the sequence. Or, more formally:

F_n = F_{n-1} + F_{n-2}, with F_0 = 0 and F_1 = 1 as the seed values.

(note: Medium does not have the ability to render equations properly so I’m using the fairly hack-ish solution of displaying mathematical notation in italics…apologies if the true meaning doesn’t come through very well.)

Our solution will be responsible for calculating each of Fibonacci numbers up to some defined limit. We’ll first implement a naive solution that re-calculates each number in the sequence from scratch.

```python
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

def all_fib(n):
    fibs = []
    for i in range(n):
        fibs.append(fib(i))
    return fibs
```

Let’s try it out on a pretty small number first.

%time print(all_fib(10))

[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

Wall time: 0 ns

Okay, probably too trivial. Let’s try a bit bigger…

%time print(all_fib(20))

[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181]

Wall time: 5 ms

The runtime was at least measurable now, but still pretty quick. Let’s try one more time…

%time print(all_fib(40))

[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986]

Wall time: 1min 9s

That escalated quickly! Clearly this is a pretty bad solution. Let’s see what it looks like when applying dynamic programming.

```python
def all_fib_dp(n):
    fibs = []
    for i in range(n):
        if i < 2:
            fibs.append(i)
        else:
            fibs.append(fibs[i - 2] + fibs[i - 1])
    return fibs
```

This time we’re saving the result at each iteration and computing new numbers as a sum of the previously saved results. Let’s see what this does to the performance of the function.

%time print(all_fib_dp(40))

[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986]

Wall time: 0 ns

By not computing the full recursive tree on each iteration, we’ve essentially reduced the running time for the first 40 numbers from ~75 seconds to virtually instant. This also happens to be a good example of the danger of naive recursive functions. Our new Fibonacci number function can compute additional values in linear time vs. exponential time for the first version.

%time print(all_fib_dp(100))

[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141, 267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073L, 4807526976L, 7778742049L, 12586269025L, 20365011074L, 32951280099L, 53316291173L, 86267571272L, 139583862445L, 225851433717L, 365435296162L, 591286729879L, 956722026041L, 1548008755920L, 2504730781961L, 4052739537881L, 6557470319842L, 10610209857723L, 17167680177565L, 27777890035288L, 44945570212853L, 72723460248141L, 117669030460994L, 190392490709135L, 308061521170129L, 498454011879264L, 806515533049393L, 1304969544928657L, 2111485077978050L, 3416454622906707L, 5527939700884757L, 8944394323791464L, 14472334024676221L, 23416728348467685L, 37889062373143906L, 61305790721611591L, 99194853094755497L, 160500643816367088L, 259695496911122585L, 420196140727489673L, 679891637638612258L, 1100087778366101931L, 1779979416004714189L, 2880067194370816120L, 4660046610375530309L, 7540113804746346429L, 12200160415121876738L, 19740274219868223167L, 31940434634990099905L, 51680708854858323072L, 83621143489848422977L, 135301852344706746049L, 218922995834555169026L]

Source : towardsdatascience.com
