Problem is attached in word document

Add to this portfolio the design of an algorithm that compares one picture with another using dynamic programming. Your algorithms group has been tasked with creating an app that performs special operations on images. Specifically, your app will compare one black-and-white image to another black-and-white image. There are a number of methods that can be used to perform this task, but your group has agreed that dynamic programming is a fast and elegant scheme for solving this problem.

Assignment

Design an algorithm (in pseudocode) that takes as input two 2-D int arrays assumed to be black-and-white images: initialImage X, whose dimensions are I x J, and finalImage Y, whose dimensions are I x K. The algorithm will compare X to Y, row by row, as defined below, employing a dynamic programming scheme to identify the minimal difference between each pair of rows.

Because you are working with black-and-white images only, you should assume that each image is a 2-D int array containing two possible values, 0 or 1, where 0 represents black and 1 represents white. This 2-D grid of 0 and 1 values comprises a 2-D black-and-white image, and each row of the image is simply a 1-D int array filled with 0s and 1s. You must therefore define how you will measure the difference between the strings of 0s and 1s in each row.

The comparison is done one row of the images at a time. First, compare X1,* to Y1,* (here X1,* is the first row in image X and Y1,* is the first row in image Y). Next, compare X2,* to Y2,*, and so on. Each of these comparisons requires the construction of a D (distance) matrix. In the following example, the first row of X is X1,*, and the first row of Y is Y1,* = 00110. Use the following recurrence relation to develop your pseudocode:

After the D matrix is completed, the minimum number in the bottom row is the minimal mismatch for that row. Assign this value to the variable minVali; it measures how different row Xi,* is from row Yi,*. Repeat this comparison for all rows i and, when complete, aggregate the differences into the variable totalDifference = Σi minVali. Finally, the algorithm compares totalDifference to a threshold value called thresh: if the total is above the threshold, the images are declared different; otherwise, they are declared similar. You can assume that the thresh variable is supplied as an input to your algorithm.

Part 1: Create a portfolio that includes all previous IPs.

Part 2a: Design pseudocode for the image comparison algorithm discussed above, given input images X, Y, and thresh. The output is a declaration: "The images are similar" or "The images are different."

Part 2b: Discuss the optimality of the dynamic programming solution, and discuss the time complexity of this algorithm in terms of the sizes of the inputs X and Y.
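The recurrence relation itself was supplied in the attached document and does not appear above, so the following Python sketch assumes the common edit-distance-style recurrence for the D matrix (cost 1 for skipping a pixel in either row, cost 0 or 1 for matching a pair of pixels). The names row_distance and compare_images are illustrative, not part of the assignment.

```python
def row_distance(x_row, y_row):
    """Build the D matrix for one pair of rows and return the minimal
    mismatch, i.e. the minimum of the bottom row of D.
    Assumes an edit-distance-style recurrence."""
    j_len, k_len = len(x_row), len(y_row)
    # D[j][k] = cost of matching the first j pixels of x_row
    # against the first k pixels of y_row.
    D = [[0] * (k_len + 1) for _ in range(j_len + 1)]
    for j in range(1, j_len + 1):
        D[j][0] = j
    for k in range(1, k_len + 1):
        D[0][k] = k
    for j in range(1, j_len + 1):
        for k in range(1, k_len + 1):
            mismatch = 0 if x_row[j - 1] == y_row[k - 1] else 1
            D[j][k] = min(D[j - 1][k] + 1,          # skip a pixel of x_row
                          D[j][k - 1] + 1,          # skip a pixel of y_row
                          D[j - 1][k - 1] + mismatch)
    return min(D[j_len])

def compare_images(x, y, thresh):
    """Aggregate the per-row minimal mismatches and compare to thresh."""
    total_difference = sum(row_distance(xr, yr) for xr, yr in zip(x, y))
    if total_difference > thresh:
        return "The images are different"
    return "The images are similar"
```

Each D matrix costs O(J x K) time, so comparing all I row pairs costs O(I x J x K) overall.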
Ameer

Dynamic programming (DP) is an algorithmic technique for solving problems by breaking them into smaller subproblems. Take the Fibonacci numbers as an example: the Fibonacci numbers form a sequence in which each number is the sum of the two preceding ones. The first few Fibonacci numbers are 0, 1, 1, 2, 3, 5, and 8, continuing from there, and if we are asked to compute the nth Fibonacci number, we can do so with the recurrence Fib(n) = Fib(n-1) + Fib(n-2), for n > 1. As this example shows, to solve the problem Fib(n) we break it down into smaller ones, Fib(n-1) and Fib(n-2).

Bellman (2015) defined the principle of optimality as follows: an optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with respect to the state resulting from the first decision. This principle relates to algorithmic paradigms such as divide and conquer and greedy algorithms. Dynamic programming likewise divides a problem into smaller subproblems, but each solved subproblem is saved and reused. This approach is especially beneficial when the subproblems overlap with other subproblems. One of its most significant benefits is that it reduces the number of iterations, because it records the optimal decisions made in earlier stages. It is similar to divide and conquer in that we divide the problem into subproblems and then combine their solutions to reach the overall solution; dynamic programming, however, explores the subproblems to ensure it selects the best decision rather than a pre-decided one. To apply it, we develop a solution for the problem, prove the principle of optimality, write a recurrence that combines the subproblems, and then write the algorithm. The main problem must satisfy the principle of optimality for dynamic programming to be applicable.
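The Fibonacci recurrence above can be sketched with memoization, so that each subproblem Fib(n) is solved only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fib(n) = Fib(n-1) + Fib(n-2) for n > 1; the cache stores each
    subproblem's answer so it is computed only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, the naive recursion recomputes the same subproblems exponentially many times; with it, computing fib(n) takes only O(n) work.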
The remaining decisions must be optimal with respect to the state following the first decision. That said, dynamic programming works from the bottom up by building sub-solutions. As we learned previously, the greedy algorithm is an optimization approach that handpicks individual choices; the greedy method makes its choices moving forward and never doubles back to reconsider a decision. Dynamic programming, by contrast, tries different approaches before reaching the final decision. The Bellman equation is defined recursively and solved backward. The principle of optimality can be written as

fN(x) = max over dn ∈ x of [ r(dn) + fN-1(T(x, dn)) ]

Consider, for example, the traveling salesman problem: we need to find the shortest route through all the destinations. Due to the number of constraints, the number of possible tours, and the nonlinearity of the problem setup, the traveling salesman problem is notoriously difficult to solve. A more efficient solution uses dynamic programming, because its basic premise is to break the problem into simpler subproblems.

Samichyya

Richard Bellman introduced the principle of optimality, which states that an optimal path has the property that, whatever the initial conditions and control variables (choices) over a given initial period, the controls (or decision variables) chosen over the remaining duration must be optimal for the remaining problem, with the state arising from the early stages (Moffatt, 2018). Dynamic programming (DP) is an optimization principle that is commonly utilized in job management. It was developed in the United States by the mathematician Richard Bellman in the 1950s. In the term dynamic programming, the word "programming" refers to scheduling (Darji, 2017). Dynamic programming is a way of dealing with a problem through its sub-problems; in this technique, each sub-problem is solved only once.
The results of each sub-problem are entered into a table, which may then be used to answer the original problem. In dynamic programming, duplicated work is completely avoided: the bottom-up problem-solving method reuses stored results, whereas the divide and conquer strategy may solve the same sub-problem repeatedly. Dynamic programming is a problem-solving technique that frequently minimizes or maximizes the value of some quantity. As with divide and conquer, problems are solved by combining the answers to sub-problems. In contrast to divide and conquer, however, the sub-problems are not independent: the answer to one sub-problem may bear on the solutions to other sub-problems of the same problem, and sub-problems can overlap with one another. By solving sub-problems from the bottom up, dynamic programming eliminates recomputation: when a sub-problem is solved for the first time, save the solution; if the same sub-problem arises again, look up the saved answer. The main idea is to avoid re-evaluation by preserving the answers to overlapping subproblems.

Steps in dynamic programming:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the optimal solution values in a table, using either a top-down or bottom-up approach.
4. Construct the optimal solution from the computed values.

Because of these features of dynamic programming, an instance can be solved using the solutions of smaller instances. The answer to a smaller instance may be needed several times, so the results are kept in a table; as a result, each smaller instance is solved only once. Time is saved by utilizing additional space.

Principle of optimality: in an optimal sequence of decisions or choices, each subsequence must also be optimal. If the principle of optimality cannot be applied, it is nearly impossible to find a solution using the dynamic programming method.
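The four steps above can be illustrated with a minimal bottom-up sketch of the classic 0/1 knapsack problem (the table here is the one-dimensional array best, indexed by remaining capacity):

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack, bottom-up: best[w] holds the best total value
    achievable with capacity w using the items seen so far."""
    best = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate capacity downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w],            # skip this item
                          best[w - wt] + v)   # take this item
    return best[capacity]
```

Each sub-problem (a capacity value for a given prefix of items) is solved exactly once and stored, which is precisely the table-filling strategy described in step 3.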
The optimality principle is used, for example, to discover the shortest path in a graph. Dynamic programming can be used to handle a variety of problems, including computing binomial coefficients, assembly-line scheduling, the knapsack problem, shortest paths, and matrix chain multiplication.
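As a small illustration of the shortest-path use of the optimality principle, here is a sketch that finds the minimum-cost path through a grid, assuming moves only right or down (the grid setting is chosen for brevity; general graphs would use an algorithm such as Bellman-Ford):

```python
def min_path_sum(grid):
    """Minimum-cost path from the top-left to the bottom-right cell,
    moving only right or down. The optimality principle gives:
    cost to reach (i, j) = grid[i][j] + min(cost above, cost to the left)."""
    rows, cols = len(grid), len(grid[0])
    cost = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == 0 and j == 0:
                cost[i][j] = grid[i][j]
            elif i == 0:
                cost[i][j] = grid[i][j] + cost[i][j - 1]
            elif j == 0:
                cost[i][j] = grid[i][j] + cost[i - 1][j]
            else:
                cost[i][j] = grid[i][j] + min(cost[i - 1][j], cost[i][j - 1])
    return cost[rows - 1][cols - 1]
```

Every prefix of an optimal path is itself optimal, which is exactly why the table entry for each cell can be built from its two neighbors' entries.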