Viacheslav

Grokking Algorithms, or a Painless Introduction to Algorithms

Published in the Random EN group
A review of the book "Grokking Algorithms": a bit of personal opinion and a few examples. I hope this review helps you decide whether you want to read the book or whether it would just gather dust on your shelf. WARNING: lots of text!


Introduction

Almost every Junior-level vacancy lists requirements like "knowledge of data structures and algorithms." For those with a formal computer-science education, algorithms are part of the standard curriculum, so there should be no problem. But what if you came to development from an entirely different field? Then all that remains is to learn on your own. The question "who is to blame" has an answer; the answer to "what to do" still has to be found. Let's look in books, and let me tell you about one of them.
[Illustration 1]

Grokking algorithms

Among all the options I came across the book "Grokking Algorithms". You can learn more here: "The book 'Grokking Algorithms. An illustrated guide for programmers and the curious'". I had noticed the book long ago, but on Ozon it cost 680 rubles. Expensive or cheap, everyone decides for themselves. This was already my second book bought on Avito at half price. So I found a copy in St. Petersburg, bought it, and went grokking. Which is what I decided to share with you. Yes, there is no Java code in the book, but there is... other code. More on that later.

Introduction to Algorithms (Selection Sort)

So, in an easy narrative style, the book brings us to the first sort in our program: selection sort. Its essence: we walk through the elements from left to right (from element 0 to the last), and among the remaining elements we look for the smallest. When we find it, we swap the element we are currently on with that smallest element. The simplest way to grasp it is to take an array, say [5, 3, 6, 2, 10], a piece of paper and a pen (the simplest and cheapest way), and picture how we have a left boundary (left), a current index (or right boundary) and the index of the minimum element, and how we work with them. For example:
[Illustration 2]
Algorithms are often described in pseudocode, for example, on Wikipedia. Ours is not exactly pseudocode, but more on that later. For now let's see:

def selectionSort(array):
    for left in range(0, len(array)):
        minIndex = left
        for right in range (left+1, len(array)):
            if array[right] < array[minIndex]:
                minIndex = right
        if minIndex != left:
            temp = array[left]
            array[left] = array[minIndex]
            array[minIndex] = temp
    return array

print(selectionSort([5, 3, 6, 2, 10]))
Now let's present it in the form of Java code:
public static void selectionSort(int[] array) {
        for (int left = 0; left < array.length; left++) {
            int minIndex = left;
            for (int right = left+1; right < array.length; right++) {
                if (array[right] < array[minIndex]) {
                    minIndex = right;
                }
            }
            if (minIndex != left) {
                int temp = array[left];
                array[left] = array[minIndex];
                array[minIndex] = temp;
            }
        }
}
As you can see, the code is almost identical. The first listing is an example from the book; the second is my own free rendition in Java.
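For a quick sanity check, the Java version can be run like this (the sample array matches the one from the book; the class name is mine):

```java
import java.util.Arrays;

public class SelectionSortDemo {
    // The same method as above, repeated here so the example compiles on its own
    public static void selectionSort(int[] array) {
        for (int left = 0; left < array.length; left++) {
            int minIndex = left;
            for (int right = left + 1; right < array.length; right++) {
                if (array[right] < array[minIndex]) {
                    minIndex = right;
                }
            }
            if (minIndex != left) {
                int temp = array[left];
                array[left] = array[minIndex];
                array[minIndex] = temp;
            }
        }
    }

    public static void main(String[] args) {
        int[] array = {5, 3, 6, 2, 10};
        selectionSort(array);
        System.out.println(Arrays.toString(array)); // [2, 3, 5, 6, 10]
    }
}
```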

Recursion

Next we are told about recursion. First there is a problem about a farmer who has a field of size A×B that must be divided into equal square plots. After that, Euclid's algorithm is mentioned. What I didn't like is that the book doesn't attempt to show its code, even though Euclid's algorithm is simple and effective:
[Illustration 3]
To be honest, the book skips some details that, for example, this video covers: "Informatics. Theory of algorithms. Euclid's algorithm". For instance, if a is less than b, then on the first call b and a swap places, and on the second call the larger is divided by the smaller. Therefore the order of the arguments doesn't matter. As usual, we can first "feel" the algorithm on a piece of paper:
[Illustration 4]
Now let's look at the code:

def euclidean(a, b):
    if a == 0 : return b
    if b == 0 : return a
    return euclidean (b, a % b)
Let's write the same code in Java. If desired, you can use an online compiler:
public static int euclid(int a, int b) {
        if (a == 0) return b;
        if (b == 0) return a;
        return euclid(b, a%b);
}
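A quick check against the farmer's field (1680×640, if I remember the book's numbers correctly) shows that the side of the largest square plot is 80:

```java
public class EuclidDemo {
    // Euclid's algorithm: gcd(a, b) = gcd(b, a mod b)
    public static int euclid(int a, int b) {
        if (a == 0) return b;
        if (b == 0) return a;
        return euclid(b, a % b);
    }

    public static void main(String[] args) {
        System.out.println(euclid(1680, 640)); // 80
        // The argument order really does not matter:
        System.out.println(euclid(640, 1680)); // 80
    }
}
```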
The factorial was also mentioned at the beginning of the book. The factorial of a number n (written n!) is the product of the numbers from 1 to n. Why would you need it? There is one practical application: if we have n objects (for example, n cities), we can arrange them in n! different orders. You can read more about recursion here: "Recursion. Training tasks". A comparison of the iterative and recursive approaches: "Recursion".
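The factorial translates to Java just as directly; here is my sketch of the recursive version (note that long overflows past 20!, so this is only illustrative):

```java
public class FactorialDemo {
    // n! = n * (n-1)!, with the base case 0! = 1! = 1
    public static long factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120: the number of ways to order 5 cities
    }
}
```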

Quick sort

Quicksort is a rather interesting algorithm, but the book doesn't pay much attention to it. Moreover, the code is given only for the worst case, when the first element is chosen as the pivot. Still, perhaps this version is easier to remember for a first acquaintance, and it's better to write a bad quicksort than none at all. Here's the example from the book:

def quicksort(array):
    if len(array) < 2:
        return array
    else:
        pivot = array[0]
        less = [i for i in array[1:] if i <= pivot]
        greater = [i for i in array[1:] if i > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)
Everything here is super simple. If the array has 0 or 1 elements, there is nothing to sort. Otherwise we take the first element of the array and treat it as the "pivot". We create two new arrays: one holds the elements greater than the pivot, the other the elements less than or equal to it. Then we repeat recursively. Not the best variant, but, again, easier to remember. Let's implement this algorithm in Java, but properly this time. The material from the review "Computer Science in JavaScript: Quicksort" will help us. First, let's draw on a piece of paper again to get a feel for the algorithm:
[Illustration 5]
It seems to me that one of the riskiest things is trying to solve the whole problem at once. Therefore, we will carry out the implementation in several small steps:
  • We need to be able to swap elements in an array:

    private static void swap(int[] array, int firstIndex, int secondIndex) {
            int temp = array[firstIndex];
            array[firstIndex] = array[secondIndex];
            array[secondIndex] = temp;
    }

  • We need a method that partitions the array, within the specified interval, around a pivot element:


    private static int partition(int[] array, int left, int right) {
            int pivot = array[(right + left) / 2];
            while (left <= right) {
                while (array[left] < pivot) {
                    left++;
                }
                while (array[right] > pivot) {
                    right--;
                }
                if (left <= right) {
                    swap(array, left, right);
                    left++;
                    right--;
                }
            }
            return left;
    }

    Details at the link above. In short: we move the left cursor until we hit an element not less than the pivot; similarly, we move the right cursor from the other end until we hit an element not greater than the pivot. If the cursors haven't crossed, we swap the elements and keep going until the cursors meet. We return the index that divides further processing into two parts.

  • There is a separation, we need the sorting itself:

    public static void quickSort(int[] array, int left, int right) {
            int index = 0;
            if (array.length > 1) {
                index = partition(array, left, right);
                if (left < index - 1) {
                    quickSort(array, left, index - 1);
                }
                if (index < right) {
                    quickSort(array, index, right);
                }
            }
    }

    That is, if the array consists of at least two elements, then they can already be sorted. First, we divide the entire array into two parts, elements smaller than the pivot and elements larger. Then we perform similar actions for each of the resulting parts.

    And for the test:

    public static void main(String []args){
            int[] array = {8,9,3,7,6,7,1};
            quickSort(array, 0, array.length-1);
            System.out.println(Arrays.toString(array));
    }
The book notes that this algorithm belongs to the so-called "divide and conquer" algorithms, in which the processed data set is halved each time. Average complexity: O(n log n)
[Illustration 6]
What's bad (that is, what I didn't like) is that the book mentions merge sort only in passing and provides no example or code. More details can be found here: "Informatics. Search and sorting algorithms: Merge sort". So, for consistency, let's write it ourselves. The algorithm itself is inherently simple and straightforward:
public static void mergeSort(int[] source, int left, int right) {
    if ((right - left) > 1) {
        int middle = (right + left) / 2;
        mergeSort(source, left, middle);
        mergeSort(source, middle + 1, right);
    }
    merge(source, left, right);
}
We determine the middle and divide the array in half. For each half we do the same and so on. Stopping condition or base case - we must have more than one element, since we cannot divide one element into two. Now we need to implement the merger, that is, merge:
public static void merge(int[] array, int from, int to) {
    int middle = ((from + to) / 2) + 1;
    int left = from;
    int right = middle;
    int cursor = 0;

    int[] tmp = new int[to - from + 1];
    while (left < middle || right <= to) {
        if (left >= middle) {
            tmp[cursor] = array[right];
            System.out.println("Remainder on the right: " + array[right]);
            right++;
        } else if (right > to) {
            tmp[cursor] = array[left];
            System.out.println("Remainder on the left: " + array[left]);
            left++;
        } else if (array[left] <= array[right]) {
            tmp[cursor] = array[left];
            System.out.println("Left is smaller: " + array[left]);
            left++;
        } else if (array[right] < array[left]) {
            tmp[cursor] = array[right];
            System.out.println("Right is smaller: " + array[right]);
            right++;
        }
        cursor++;
    }
    System.arraycopy(tmp, 0, array, from, tmp.length);
}
There's not much to comment on here: between the variable names and the println output, everything is clear. Well, to check:
int array[] = {1, 7, 3, 6, 7, 9, 8, 4};
mergeSort(array, 0, array.length - 1);
System.out.println(Arrays.toString(array));

Hash tables

The book also touches on hash tables. You don't have to implement one yourself, and the essence of a hash table is quite simple. Besides, Java already ships an implementation: java.util.Hashtable. If we look inside Hashtable, we will see an array of Entry objects. An Entry is a record combining a key and a value. Hashtable has an initialCapacity, that is, an initial size, and a loadFactor, a load factor (0.75 by default). This number says at what load of the array (number of elements / total size) the size needs to be increased; in Java it doubles. The book explains that hash tables are called hash tables because, based on a hash function of the key, the array cell (bucket) that will hold the Entry is chosen. You can read more here: "Data structures in pictures. HashMap and LinkedHashMap". Or in books, for example: "HashTable basics".
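There is nothing to implement here, but a tiny usage sketch shows the key-to-bucket idea in practice (the price list below is made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class HashTableDemo {
    public static void main(String[] args) {
        // The key's hashCode() determines the bucket the entry lands in
        Map<String, Integer> prices = new HashMap<>(16, 0.75f); // initialCapacity, loadFactor
        prices.put("apple", 67);
        prices.put("milk", 149);

        System.out.println(prices.get("milk"));            // 149, found in O(1) on average
        System.out.println(prices.containsKey("avocado")); // false
    }
}
```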

Graphs, Breadth First Search (shortest path search)

Probably one of the most interesting topics is graphs, and to be fair, the book pays a lot of attention to them. Maybe that alone is reason to read it, although it could perhaps have been laid out a little more clearly. But we have the Internet, and in addition to the book you can watch this playlist on the theory, "for those who are hearing about graphs for the first time". Naturally, at the very beginning the book presents the breadth-first search algorithm, also known as BFS. The book contains the following graph:
[Illustration 7]
The book states that a queue will help us: one to which we can append elements at the end while processing from the front. Such double-ended queues are called Deque in English. To map a name to its neighbors, the book suggests using a hash table (with numbered vertices you could simply use an array). This way of storing vertices is called an "adjacency list", which the book doesn't mention, and that is a minus for it. Let's implement this in Java:
private Map<String, String[]> getGraph() {
    Map<String, String[]> map = new HashMap<>();
    map.put("you", new String[]{"alice", "bob", "claire"});
    map.put("bob", new String[]{"anuj", "peggy"});
    map.put("alice", new String[]{"peggy"});
    map.put("claire", new String[]{"thom", "jonny"});
    map.put("anuj", null);
    map.put("peggy", null);
    map.put("thom", null);
    map.put("jonny", null);
    return map;
}
Now the search itself, built on this data:
private String search() {
    Map<String, String[]> graph = getGraph();
    Set<String> searched = new HashSet<>();
    Deque<String> searchQue = new ArrayDeque<>();
    searchQue.add("you");
    while (!searchQue.isEmpty()) {
        String person = searchQue.pollFirst();
        System.out.println(person);
        if (personIsSeller(person)) {
            return person;
        } else {
            String[] friends = graph.get(person);
            if (friends == null) continue;
            for (String friend : friends) {
                if (friend != null && !searched.contains(friend)) {
                    searchQue.addLast(friend);
                }
            }
        }
    }
    return null;
}
As you can see, nothing complicated. If you compare it with the code from the book, it’s almost the same.
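One thing the snippet above leaves out is personIsSeller. In the book the check is simply whether the name ends with the letter "m" (making thom the mango seller); my Java sketch of it:

```java
private static boolean personIsSeller(String person) {
    // The book's toy rule: a mango seller is anyone whose name ends in "m"
    return person != null && person.endsWith("m");
}
```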

Graphs, Dijkstra's Algorithm

Having more or less understood BFS, the author invites us to examine Dijkstra's algorithm and weighted graphs. The following graph is proposed for the solution:
[Illustration 8]
First, we need to decide how to represent the graph. We could represent it as a matrix; an article on Habr will help us here: "Dijkstra's algorithm. Finding optimal routes on a graph". Let's use the adjacency matrix:
public Integer[][] getGraphMatrix() {
    Integer[][] matrix = new Integer[4][4];
    matrix[0][1] = 6;
    matrix[0][2] = 2;
    matrix[2][1] = 3;
    matrix[1][3] = 1;
    matrix[2][3] = 5;
    return matrix;
}
And now the logic itself:
@Test
public void dijkstra() {
    Integer[][] graph = getGraphMatrix();           // Graph data
    Integer[] costs = new Integer[graph.length];    // Cost to reach each vertex
    Integer[] parents = new Integer[graph.length];  // Parent vertex
    BitSet visited = new BitSet(graph.length);      // Visited markers

    Integer w = 0;
    do {
        System.out.println("-> Considering vertex: " + w);
        Integer min = null;
        for (int i = 0; i < graph.length; i++) {    // Process every edge
            if (graph[w][i] == null) continue;      // No edge, move on
            if (min == null || (!visited.get(i) && graph[w][min] > graph[w][i])) {
                min = i;
            }
            if (costs[i] == null || costs[i] > costs[w] + graph[w][i]) {
                System.out.print("Changing cost from " + costs[i]);
                costs[i] = (costs[w] != null ? costs[w] : 0) + graph[w][i];
                System.out.println(" to " + costs[i] + " for vertex " + i);
                parents[i] = w;
            }
        }
        System.out.println("Vertex with minimum weight: " + min);
        visited.set(w);
        w = min;
    } while (w != null);

    System.out.println(Arrays.toString(costs));
    printPath(parents, 3);
}

public void printPath(Integer[] parents, int target) {
    Integer parent = target;
    do {
        System.out.print(parent + " <- ");
        parent = parents[parent];
    } while (parent != null);
}
The book breaks it down quite thoroughly, step by step. If you add the Habr article and study the code, it sticks in the memory. I found the step-by-step walkthrough a bit cluttered, but the step-by-step approach itself is a plus. Overall it's fine, though it could have been better.

Greedy algorithms

The next section is devoted to "greedy algorithms". This section is interesting because it uses sets (java.util.Set); finally we can see what they might be needed for. We take a list of states as input:
Set<String> statesNeeded = new HashSet<>();
statesNeeded.addAll(Arrays.asList("mt", "wa", "or", "id", "nv", "ut", "ca", "az" ));
And also a list of radio stations covering some of these states:
Map<String, Set<String>> stations = new HashMap<>();
stations.put("kone", new HashSet<>(Arrays.asList("id", "nv", "ut")));
stations.put("ktwo", new HashSet<>(Arrays.asList("wa", "id", "mt")));
stations.put("kthree", new HashSet<>(Arrays.asList("or", "nv", "ca")));
stations.put("kfour", new HashSet<>(Arrays.asList("nv", "ut")));
stations.put("kfive", new HashSet<>(Arrays.asList("ca", "az")));
The book goes on to point out and explain the algorithm itself:
Set<String> finalStations = new HashSet<>();
while (!statesNeeded.isEmpty()) {
    String bestStation = null;
    Set<String> statesCovered = new HashSet<>();
    for (String station: stations.keySet()) {
        Set<String> covered = new HashSet<>(statesNeeded);
        covered.retainAll(stations.get(station));
        if (covered.size() > statesCovered.size()) {
           bestStation = station;
           statesCovered = covered;
        }
    }
    statesNeeded.removeAll(statesCovered);
    finalStations.add(bestStation);
}
System.out.println(finalStations);

Dynamic programming

The book also describes problems tackled with an approach called "dynamic programming". We are given this problem:
[Illustration 9]
We have a bag that holds 4 lb. We need to find the most valuable combination of items for the given weight. First, let's make a list of items:
List<Thing> things = new ArrayList<>();
things.add(new Thing("guitar", 1, 1500));
things.add(new Thing("tape recorder", 4, 3000));
things.add(new Thing("notebook", 3, 2000));
Now the algorithm itself:
int bagSize = 4;
int cell[][] = new int[things.size()][bagSize];
// Fill the first row unconditionally
for (int i = 0; i < bagSize; i++) {
    cell[0][i] = things.get(0).cost;
}
// Fill the remaining rows
for (int i = 1; i < cell.length; i++) {
    for (int j = 0; j < cell[i].length; j++) {
        // If the item doesn't fit, take the previous maximum
        if (things.get(i).weight > j+1) {
            cell[i][j] = cell[i - 1][j];
        } else {
            // Otherwise: the current cost + the previous maximum for the remaining capacity
            cell[i][j] = things.get(i).cost;
            if (j + 1 - things.get(i).weight > 0) {
                cell[i][j] += cell[i-1][j + 1 - things.get(i).weight];
            }
        }
    }
}
System.out.println(Arrays.deepToString(cell));
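The Thing class is not spelled out in the text; a minimal version that makes the snippets above compile could look like this:

```java
public class Thing {
    public final String name;
    public final int weight; // in pounds
    public final int cost;   // in dollars

    public Thing(String name, int weight, int cost) {
        this.name = name;
        this.weight = weight;
        this.cost = cost;
    }
}
```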
There is also an interesting task to find the most similar words. Interesting, isn't it? More details here: LongestCommonSubsequence.java

Search for nearest neighbors

The book also very clearly talks about the k-nearest neighbors algorithm:
[Illustration 10]
And the formula for calculation is given:
[Illustration 11]
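The formula in the picture is the ordinary Euclidean distance between feature vectors; a Java sketch might look like this (the sample points below are mine, not the book's):

```java
public class KnnDistance {
    // Euclidean distance: the square root of the sum of squared coordinate differences
    public static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // Two users rated the same two films; how "far apart" are their tastes?
        System.out.println(distance(new double[]{0, 0}, new double[]{3, 4})); // 5.0
    }
}
```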

Bottom line

The book ends with an interesting "What next?" section, which gives a quick overview of other interesting algorithms, including a brief description of what trees are for. Overall, I liked the book. It should not be taken as comprehensive reference material; you will still have to search and figure things out for yourself. But as an introduction that sparks interest and gives an initial picture, it is quite good. Yes, the code in the book is written in Python, which is why all the examples above are my own compilable Java versions. I hope this review helps you get an idea of what the book contains and whether it is worth buying.

Additionally

You can also check out the following resources on this topic:
  1. EdX - Introduction to Java Programming: Fundamental Data Structures and Algorithms
  2. LinkedIn - Introduction to Data Structures & Algorithms in Java (paid)
  3. 27 sites with puzzles to sharpen your programming skills
  4. Java CodingBat
  5. Tasks for programmers, answers to tasks of varying complexity