# Problem Address (Interview Question 17.23. Max Black Square)

https://leetcode-cn.com/probl…

## Problem Description

```
Given a square matrix in which every cell (pixel) is either black or white, design an algorithm to find the largest subsquare whose four borders consist entirely of black pixels.
Return an array [r, c, size], where r and c are the row and column of the subsquare's upper-left corner, and size is its side length. If several subsquares qualify, return the one with the smallest r; if r is tied, return the one with the smallest c. If no subsquare qualifies, return an empty array.
Example 1:
Input:
[
[1,0,1],
[0,0,1],
[0,0,1]
]
Output: [1,0,2]
Explanation: in the input, 0 represents black and 1 represents white. The bolded elements form the largest subsquare that satisfies the condition.
Example 2:
Input:
[
[0,1,1],
[1,0,1],
[1,1,0]
]
Output: [0,0,1]
Constraints:
matrix.length == matrix[0].length <= 200
```

## Prerequisites

## Companies

- Not yet

## Approach

A look at the data range shows that the matrix is at most $200 \times 200$, so the intended solution is almost certainly brute force. For this range, the complexity budget is roughly the cube of n, where n is the side length of the matrix. As I mentioned in an earlier article, *Let's talk about how I grind problems (part 3)*, $200^3$ is only 8 million operations, while anything much beyond **10 million** is likely to time out.

At first glance this problem looks like 221. Maximal Square, but it is not. Here the square may be hollow: only the four borders need to be 0, which makes it a completely different problem.

As shown in the figure below, the red part is the answer. We only need every border cell to be 0; it does not matter if there are 1s inside.

Let us start from a local picture and see whether it opens up any ideas.

This is a common technique: when you face a hard problem, drawing a diagram and reasoning from special, local cases helps generate ideas.

For example, suppose I want to compute the largest black-bordered square whose lower-right corner is the red cell in the figure below. We can probe from the current cell upward and to the left until we hit a cell that is not 0.

In the example above, it is easy to see that the largest black-bordered square cannot have side length exceeding min(4, 5).

So is the answer 4? For this case, yes, but there are other cases. For example:

Therefore, although the upper bound of the solution space is 4, the lower bound may still be 1.

The lower bound is 1 because we only care about cells whose value is 0; in the worst case, the largest square such a cell anchors is the cell itself.

So far we have committed to a brute-force algorithm. What does brute force over this solution space look like? Nothing more than **enumerating the entire solution space, with pruning where possible**.

To put it bluntly:

- Does side length 1 work?
- Does side length 2 work?
- Does side length 3 work?
- Does side length 4 work?

That is the special, local case.

Once this special case is understood, the general case is not difficult.

Algorithm description:

- Scan the matrix from left to right and from top to bottom.
- If a cell's value is 0, probe upward and to the left until the first cell that is not 0; the upper bound on the square's side length is then min(length extended leftward, length extended upward).
- Try side lengths from 1 up to that upper bound until a square can no longer be formed; the last feasible square is the largest one anchored at the current cell.
- Maintain the maximum while scanning, and finally return the maximum together with its corner information.
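
The probe step above can be sketched as a tiny helper. This is a naive version without memoization; `max_extend`, the grid, and the coordinates are illustrative names, not from the original:

```python
def max_extend(matrix, r, c):
    """How far can we extend from (r, c) through 0s (black cells)?

    Returns (up, left): counts of consecutive 0s, including (r, c) itself.
    """
    if matrix[r][c] != 0:
        return 0, 0
    up = left = 0
    i = r
    while i >= 0 and matrix[i][c] == 0:  # probe upward
        up += 1
        i -= 1
    j = c
    while j >= 0 and matrix[r][j] == 0:  # probe leftward
        left += 1
        j -= 1
    return up, left


grid = [
    [1, 0, 1],
    [0, 0, 1],
    [0, 0, 1],
]
# From (1, 1): two 0s upward and two 0s leftward, so any square with
# (1, 1) as lower-right corner has side length at most min(2, 2) = 2.
print(max_extend(grid, 1, 1))  # (2, 2)
```

The upper bound for a given lower-right corner is then just `min(*max_extend(grid, r, c))`.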

The only remaining difficulty is the third step: **how to try side lengths from 1 up to the upper bound**. This is not hard either; we simply:

- probe upward while extending to the left, and
- probe to the left while extending upward.

It may be easier to understand by looking at the figure below.

As shown in the figure above, we first try whether side length 2 is feasible. If it is, we keep **being greedy** until it becomes infeasible or we hit the upper bound.
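
To make the per-side-length check concrete: a k × k black-bordered square with lower-right corner (r, c) is feasible exactly when all four border edges consist of 0s. A direct O(k) check might look like this (`square_ok` is an illustrative helper, not part of the final solution, which replaces these scans with memoized lookups):

```python
def square_ok(matrix, r, c, k):
    """Does a k x k square with all-0 (black) borders have (r, c) as its lower-right corner?"""
    top, left = r - k + 1, c - k + 1
    if top < 0 or left < 0:
        return False
    # Top and bottom edges.
    if any(matrix[top][j] != 0 or matrix[r][j] != 0 for j in range(left, c + 1)):
        return False
    # Left and right edges.
    if any(matrix[i][left] != 0 or matrix[i][c] != 0 for i in range(top, r + 1)):
        return False
    return True


grid = [
    [1, 0, 1],
    [0, 0, 1],
    [0, 0, 1],
]
print(square_ok(grid, 2, 1, 2))  # True: the 2x2 over rows 1-2, cols 0-1 has all-0 borders
print(square_ok(grid, 2, 2, 2))  # False: its right edge (column 2) contains 1s
```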

Next, analyze the time complexity of the algorithm.

- Since every cell with value 0 must probe leftward and upward, each probe costs O(n) in the worst case, where n is the side length.
- While moving leftward and upward we must keep probing, which costs another O(n) in the worst case.

Since we may execute the above logic for $O(n^2)$ cells, the total time complexity is $O(n^4)$.

In fact, each cell's extension lengths are independent subproblems, so a memo (such as a hash table) can store each cell's results, which optimizes the complexity to $O(n^3)$.

For example, during the upward and leftward probes described above, if the extension results of the cell above and the cell to the left have already been computed, we can use them directly, which reduces each probe to $O(1)$. The computation of the current cell therefore depends on the cells to its left and above, which is why **scanning the matrix from left to right and top to bottom** is the right choice: by the time we visit the current cell, **the results for the cells to its left and above have already been computed**.

- From (4,5), look at the adjacent cell above. If it is 1, stop immediately.
- If the cell above is 0, query the memo.
- The memo returns a result; adding 1 to it gives the maximum length that (4,5) can extend upward.

For example, suppose we need the side length of the largest square with (4,5) as its lower-right corner. The first step is to probe upward to (3,5). Once there, we do not need to keep extending upward; we read the memo instead: the number of consecutive 0 cells upward from (4,5) is the count stored for (3,5) plus 1.
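
The memoized extension lengths can be sketched with two 1-padded tables that mirror the recurrence described above (`up` and `left` are illustrative names; 0 means black, as in the problem):

```python
grid = [
    [1, 0, 1],
    [0, 0, 1],
    [0, 0, 1],
]
n = len(grid)
# up[i][j] / left[i][j]: consecutive 0s ending at cell (i-1, j-1),
# looking upward / leftward; row 0 and column 0 are zero padding.
up = [[0] * (n + 1) for _ in range(n + 1)]
left = [[0] * (n + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
    for j in range(1, n + 1):
        if grid[i - 1][j - 1] == 0:
            up[i][j] = up[i - 1][j] + 1      # O(1): reuse the cell above
            left[i][j] = left[i][j - 1] + 1  # O(1): reuse the cell to the left

# Cell (2, 1) (0-indexed) extends 3 cells upward and 2 leftward, inclusive.
print(up[3][2], left[3][2])  # 3 2
```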

One last question: which data structure supports the above query in $O(1)$ time? A HashMap works, and so does an array.

- The advantage of a hash map is that you do not need to allocate space up front. The disadvantage is that with large inputs, collision handling in the hash table may cause timeouts; for example, using a hash table for storage in Stone Game problems easily times out.
- The pros and cons of an array are almost the opposite. An array must be allocated at the required size in advance, but it has no collisions and computes no hash keys, so it often performs better. Furthermore, an array is a memory-contiguous data structure, which is CPU-cache friendly, so at the same asymptotic complexity it runs faster; a hash table's buckets use linked lists or trees, which are less friendly to the CPU cache.

To sum up, I recommend using an array for storage.
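
To make the trade-off concrete, here are the two memo layouts side by side (a sketch; the sizes, names, and sample values are illustrative):

```python
n = 200  # maximum side length from the constraints

# Array memo: preallocated (n+1) x (n+1) x 2, contiguous and index-based.
memo_arr = [[[0, 0] for _ in range(n + 1)] for _ in range(n + 1)]
memo_arr[4][5][0] = 3  # up-extension length stored for one cell

# Hash memo: no preallocation, but every access pays to hash the key.
memo_map = {}
memo_map[(4, 5)] = [3, 0]

print(memo_arr[4][5][0] == memo_map[(4, 5)][0])  # True
```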

That is about it. This is, in fact, dynamic programming plus optimization, and there is nothing magical about it: most of the time it is just **brute-force enumeration + memoization**.

## Code

Code support: Java, Python

Java Code:

```java
class Solution {
    public int[] findSquare(int[][] matrix) {
        int[] res = new int[0];
        // dp[0][i][j]: consecutive 0s ending at (i-1, j-1), looking upward
        // dp[1][i][j]: consecutive 0s ending at (i-1, j-1), looking leftward
        int[][][] dp = new int[2][matrix.length + 1][matrix[0].length + 1];
        int max = 0;
        for (int i = 1; i <= matrix.length; i++) {
            for (int j = 1; j <= matrix[0].length; j++) {
                if (matrix[i - 1][j - 1] == 0) {
                    dp[0][i][j] = dp[0][i - 1][j] + 1;
                    dp[1][i][j] = dp[1][i][j - 1] + 1;
                    int bound = Math.min(dp[0][i][j], dp[1][i][j]);
                    for (int k = 0; k < bound; k++) {
                        // side length k+1 works if the left-extension of the
                        // top-right corner (top edge) and the up-extension of
                        // the bottom-left corner (left edge) are long enough
                        if (dp[1][i - k][j] >= k + 1 && dp[0][i][j - k] >= k + 1) {
                            if (k + 1 > max) {
                                res = new int[3];
                                max = k + 1;
                                res[0] = i - k - 1;
                                res[1] = j - k - 1;
                                res[2] = max;
                            }
                        }
                    }
                }
            }
        }
        return res;
    }
}
```

Python Code:

```python
from typing import List


class Solution:
    def findSquare(self, matrix: List[List[int]]) -> List[int]:
        n = len(matrix)
        # dp[i][j][0]: consecutive 0s ending at (i-1, j-1), looking upward
        # dp[i][j][1]: consecutive 0s ending at (i-1, j-1), looking leftward
        dp = [[[0, 0] for _ in range(n + 1)] for _ in range(n + 1)]
        ans = []
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if matrix[i - 1][j - 1] == 0:
                    dp[i][j][0] = dp[i - 1][j][0] + 1
                    dp[i][j][1] = dp[i][j - 1][1] + 1
                    upper = min(dp[i][j][0], dp[i][j][1])
                    for k in range(upper):
                        if min(dp[i - k][j][1], dp[i][j - k][0]) >= k + 1:
                            if not ans or k + 1 > ans[2]:
                                ans = [i - k - 1, j - k - 1, k + 1]
        return ans
```

**Complexity analysis**

- Time complexity: $O(n^3)$, where n is the side length of the matrix.
- Space complexity: the bottleneck is the memo, whose size equals that of the matrix, so the space complexity is $O(n^2)$, where n is the side length of the matrix.

That is all for this article. If you have any thoughts on it, please leave me a message; I will check and answer them one by one when I have time. For more algorithm patterns, visit my LeetCode solution repository: https://github.com/azl3979858…. It already has 38K stars. You can also follow my official account, "Li Kaga", which walks you through the hard parts of algorithms.