Overtime Murder Caused by small paging



Problem analysis

After reading the dialogue above, have you, as a programmer, ever run into this problem? The pattern is everywhere, in legacy systems and online alike: as the user scrolls, the client keeps updating the PageSize and PageIndex parameters and sends them to a server interface to fetch data. Articles online rarely discuss whether this approach has a problem. So, does it?

However the program is written, the core action of pagination is to fetch a slice of data between a start position and an end position. No matter how complex your sorting rules are, the goal is always a contiguous slice of the full, ordered list. Whether you paginate directly with a SQL statement or through a search engine (e.g. Elasticsearch), the effect on the client is the same: the next page of data is displayed.

Of course, pagination can take many interaction styles in the client UI.

For waterfall-flow layouts, segmented scrolling in an app, or any other case where the total count is not needed, Caicai believes the server should never query the total number of rows. The client, as the one displaying the data, can simply use "is there data on the next page?" as the signal for whether to keep pulling.

Back to the topic: if the client pages purely on PageSize and PageIndex, is there a problem? Of course there is; otherwise, what would be the point of Caicai writing this article? I'm not a programmer who writes for nothing~~

The problem

Take the simplest, most basic case, paging with a SQL statement, as an example. Suppose the database currently holds the values 1 through 7.
The sorting rule is descending by value, so the full list is:

7, 6, 5, 4, 3, 2, 1
If you fetch the second page with PageSize = 2 and PageIndex = 2, the correct result is "5, 4". No doubt about it, as long as the data has not changed. But what if it does change? Suppose a new value, 8, is inserted; the list becomes
8, 7, 6, 5, 4, 3, 2, 1
By the same pagination rule, the second page now returns "6, 5". The user who already saw "7, 6" on page one now sees 6 again. Spotted the problem? This may well be what kept sister D working overtime.

Paging operations are based on dynamic data

Solving the problem

The data source of a paging operation is dynamic. When a change lands inside the range you are fetching, you get duplicated or missing data. How do we solve this?


As the consumer and displayer of the data, the client can remember the primary keys of the rows it has already loaded. When a row arrives that has already been displayed, decide per business requirements whether to show it again; in general, you deduplicate.

If the amount of data is very large, having the client maintain such a data pool is not ideal, so the server can help instead:

  1. Add a lastId parameter, the ID of the last row on the previous page, to the server's paging interface, and drop PageIndex. In most cases the only job of PageIndex on the server is to determine the starting offset; with lastId, PageIndex is unnecessary in many scenarios.
  2. The server can cache a snapshot of the data, making the dynamic data static for a certain period of time, but this is only a stopgap.
  3. If the business has no sorting requirement, the server can paginate in insertion order, fetching from data segments that never change.

It’s difficult for the server to make dynamic data static

The business side

No matter how the program is optimized, it cannot change the fact that the data is constantly changing. If the business side (product, operations) can accept that an item occasionally appears twice, it can save programmers a great deal of work.

Sometimes what you consider a data bug is not necessarily a major problem for other business departments.
