Large file upload and breakpoint continuation control


Recently I ran into a requirement to upload very large files. After investigating the chunked (sliced) upload features of Qiniu and Tencent Cloud, I've organized here how the front end can implement large-file upload.

In some businesses, uploading large files is an important interaction scenario, such as uploading large Excel spreadsheets or audio and video files. If the file is large or network conditions are poor, the upload takes a long time (more packets are transmitted, so the probability of packet loss and retransmission is higher), and the user cannot refresh the page and can only wait patiently for the request to complete.

Starting from the ways files can be uploaded, this article works out an approach to large-file upload and gives example code. Because PHP has convenient built-in methods for splitting and joining files, the server-side code is written in PHP.

The sample code for this article is on GitHub, mainly for reference:

Talk about big file upload

Large file cutting and uploading

Several ways to upload files

First, let's look at several ways of uploading files.

Common form upload

Using PHP to demonstrate a regular form upload is a good choice. First, build the form for file upload, and set the form's enctype to "multipart/form-data", indicating that the form will submit binary data.


Then write the receiving code in index.php, using PHP's move_uploaded_file method to save the uploaded file.

When a form uploads a large file, it is easy to hit server timeouts. With XHR, the front end can also upload files asynchronously; there are generally two approaches.

File code upload

The first approach is to encode the file on the client and decode it on the server. I previously wrote a blog post about compressing and uploading images on the front end; its core idea is to convert the image to a Base64 string for transmission:

```javascript
var imgURL = URL.createObjectURL(file);
var img = new Image();
img.onload = function () {
  ctx.drawImage(img, 0, 0);
  // Get the encoding of the picture, then upload the picture as a long string
  var data = canvas.toDataURL("image/jpeg", 0.5);
};
img.src = imgURL;
```

On the server, the first thing to do is decode the image:

```php
$imgData = $_REQUEST['imgData'];
$base64 = explode(',', $imgData)[1];
$img = base64_decode($base64);
$url = './test.jpg';
if (file_put_contents($url, $img)) {
    exit(json_encode(array(
        'url' => $url
    )));
}
```

The disadvantage of Base64 encoding is that the encoded data is larger than the original (Base64 turns every three bytes into four, so the encoded text is about one third larger than the original). For large files, upload and parsing time increase noticeably.

For more information about Base64, you can refer to Base64 notes.
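As a quick sanity check of that one-third figure, here is a small Node.js sketch (Buffer stands in for the raw file bytes):

```javascript
// 300 raw bytes become 400 Base64 characters: 4/3 of the original size
const raw = Buffer.alloc(300, 0xff);
const encoded = raw.toString('base64');
console.log(raw.length, encoded.length);  // 300 400
console.log(encoded.length / raw.length); // ≈ 1.33
```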

In addition to Base64 encoding, you can also read the file content directly on the front end and upload it in binary format:

```javascript
// Interpret the text as binary data
function readBinary(text) {
  var data = new ArrayBuffer(text.length);
  var ui8a = new Uint8Array(data, 0);
  for (var i = 0; i < text.length; i++) {
    ui8a[i] = text.charCodeAt(i) & 0xff;
  }
  console.log(ui8a);
}

var reader = new FileReader();
reader.onload = function () {
  readBinary(this.result); // read the result, or upload it directly
};
// Put the file content read from the input into the result field of the FileReader
reader.readAsBinaryString(file);
```

FormData asynchronous upload

The FormData object is mainly used to assemble a set of key/value pairs to send with a request, which makes sending Ajax requests more flexible. You can use FormData to simulate a form submission.

```javascript
let file = input.files[0]; // get the File object from the input
let formData = new FormData();
formData.append('file', file);

let xhr = new XMLHttpRequest();
xhr.open('POST', '/index.php');
xhr.send(formData);
```

The processing method of the server is basically the same as that of the direct form request.

Iframe upload without page refresh

On older browsers (such as IE), XHR does not support uploading FormData directly, so you can only upload files with a form, and a form submission by itself navigates the page. This is governed by the form's target attribute, whose values are:

_self, the default, opens the response in the same window

_blank, opens it in a new window

_parent, opens it in the parent window

_top, opens it in the topmost window

framename, opens it in the iframe with the specified name

If you want users to experience an asynchronous-feeling upload, you can use framename to target an iframe: set the form's target attribute to an invisible iframe. The response is then received by the iframe, so only the iframe refreshes, and the result can be obtained by parsing the text inside the iframe.


```javascript
var now = +new Date();
var id = 'frame' + now;
$('body').append('<iframe style="display:none;" name="' + id + '" id="' + id + '"/>');

var $form = $('#myForm');
$form.attr({
  'action': '/index.php',
  'method': 'post',
  'enctype': 'multipart/form-data',
  'encoding': 'multipart/form-data',
  'target': id
}).submit();

$('#' + id).on('load', function () {
  // Parse the response text returned inside the iframe
  var content = $(this).contents().find('body').text();
  try {
    var data = JSON.parse(content);
  } catch (e) {
    console.error(e);
  }
});
```

Large file upload

Now let's look back at the timeout problem each of these upload methods has with large files.

Form upload and the iframe no-refresh upload both submit files through the form tag, handing the whole request to the browser; when uploading a large file, the request may time out.

FormData actually wraps a set of request parameters in XHR to simulate a form request, so it cannot avoid the timeout problem either.

With encoded upload, we can control the uploaded content more flexibly.

The main problem with uploading a large file is that a single request carries a large amount of data, so it takes a long time, and after a failure the upload has to start over. Imagine splitting the request into multiple requests: each request becomes shorter, and if one fails we only need to resend that one instead of starting from scratch. Wouldn't that solve the large-file upload problem?

Based on the problems above, large-file upload needs to meet the following requirements:

Support split upload request (i.e. slicing)

Support breakpoint continuation

Support displaying upload progress and pausing the upload

Next, let's implement these functions in turn; the core one is slicing.

File slicing

Reference: large file cutting and uploading

With encoded upload, on the front end we only need to get the binary content of the file, split it, and upload each slice to the server.

In JavaScript, the File object is a subclass of the Blob object. Blob has an important method, slice, with which we can split a binary file.
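A minimal sketch of such splitting (the chunk size and function name are my own; Blob.slice works the same in the browser and in Node.js 18+):

```javascript
// Split a Blob into fixed-size slices with Blob.slice
function createChunks(blob, chunkSize) {
  const chunks = [];
  let cur = 0;
  while (cur < blob.size) {
    // slice(start, end): an end past the blob size is clamped automatically
    chunks.push(blob.slice(cur, cur + chunkSize));
    cur += chunkSize;
  }
  return chunks;
}
```

Each chunk can then be appended to a FormData together with its index and uploaded as a separate request.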

The following is an example of splitting a file. With UP6, developers do not need to care about the details of splitting; it is implemented by the control, and developers only need to care about business logic.

(screenshot: file-splitting sample code)

When the control uploads, it adds related information to each file block's data, and the developer can process it after receiving the data on the server.

(screenshot: the information attached to each file block)

After the server receives these slices, it splices them back together. Below is sample PHP code for splicing slices; with UP6, developers do not need to write the splicing themselves, since UP6 provides sample code implementing this logic.

(screenshot: server-side splicing sample code)

To ensure uniqueness, the control adds information to each file block, such as the block index, the block MD5, and the file MD5.

Breakpoint continuation

UP6 has resume support built in. UP6 saves the file information on the server and the upload progress on the client; when uploading, the control automatically loads the progress information, and developers don't need to care about these details. In the file-block handling logic, blocks only need to be identified by their index.

Now if you refresh the page or close the browser mid-upload, the previously uploaded slices will not be uploaded again when the same file is uploaded next time.

The server-side logic for resuming is basically similar: inside getUploadSliceRecord, call the server's query interface to obtain the records of uploaded slices, so it is not expanded here.
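In client terms, the resume logic boils down to: query the uploaded-slice records first, then skip those indexes. A hedged sketch (the two callbacks are hypothetical stand-ins for the server query and the actual slice upload):

```javascript
// Upload only the slices the server does not already have
async function uploadWithResume(chunks, fileMd5, { fetchUploadedIndexes, uploadChunk }) {
  const done = new Set(await fetchUploadedIndexes(fileMd5)); // e.g. [0, 2]
  for (let i = 0; i < chunks.length; i++) {
    if (done.has(i)) continue; // this slice is already on the server
    await uploadChunk(chunks[i], i, fileMd5);
  }
}
```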

In addition, resumable upload needs to consider slice expiration: once the mkfile interface is called, the slice content on disk can be cleared, but if the client never calls mkfile, keeping those slices on disk forever is clearly unreliable. Usually slice uploads have a validity period, after which they are cleared. For these reasons, resumable upload must also implement slice-expiration logic alongside.

Continuation effect

(screenshot: resume effect)


Upload progress and pause

Through the progress event of xhr.upload, we can monitor the upload progress of each slice.
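For example, keeping the latest progress event per slice, the overall percentage can be aggregated like this (a sketch; the names are mine, not from a library):

```javascript
// Combine per-slice { loaded, total } progress-event data into one percentage
function totalProgress(sliceProgress) {
  let loaded = 0, total = 0;
  for (const p of sliceProgress) {
    loaded += p.loaded;
    total += p.total;
  }
  return total === 0 ? 0 : Math.round((loaded / total) * 100);
}
```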

Pausing an upload is also relatively simple: calling xhr.abort() cancels the upload of the slices that have not finished, achieving the effect of pausing. Resuming the upload is similar to breakpoint continuation: first obtain the list of uploaded slices, then resend the slices that have not been uploaded.

Due to space constraints, the upload progress and pause functions are not implemented here.

Effect achieved:

(screenshot: upload effect)



There are already mature large-file upload solutions in the community, such as the Qiniu SDK and the Tencent Cloud SDK. We may never need to hand-roll a simple large-file upload library, but it is well worth understanding how it works.

This article first sorted out several ways of uploading files from the front end, then discussed several scenarios of large-file upload and the functions it needs to implement:

Split the file into slices with the Blob object's slice method

Sorted out the conditions and parameters the server needs to restore the file, and demonstrated restoring the slices into a file with PHP

Achieved resumable upload by saving records of uploaded slices

Some problems remain, such as avoiding memory overflow when merging files, a slice-failure strategy, and the upload progress and pause functions, which have not been explored in depth or implemented one by one here. Keep learning.

The back-end code logic is mostly the same; at present it can support MySQL, Oracle, and SQL. You need to configure the database before using it; you can refer to this article I wrote: Java HTTP big file breakpoint continuous upload – Zeyou software blog.
Welcome to join the group to discuss: 374992201
