Spring Boot 2.x (16): Playing with Vue File Upload

Time: 2019-8-14

Why use Vue-Simple-Uploader

Recently I needed to implement file upload with Vue + Spring Boot. After hitting a few snags and comparing several Vue components, I found a very handy one: vue-simple-uploader.

So why choose this component? Compared with the upload components in ant-design-vue and element-ui, it can do quite a bit more, for example:

  • Pause and resume uploads
  • Upload queue management with a configurable maximum number of concurrent uploads
  • Chunked upload
  • Progress display, estimated remaining time, automatic retry on error, re-upload, and similar operations
  • "Instant upload": by asking the server whether the file already exists, an existing file can be "uploaded" in an instant

Since my requirements called for resumable (breakpoint) uploads, I chose this component. Let's start with the most basic upload.

Single file upload, multi-file upload, folder upload

Vue code:

<uploader
        :options="uploadOptions1"
        :autoStart="true"
        class="uploader-app"
      >
        <uploader-unsupport></uploader-unsupport>
        <uploader-drop>
          <uploader-btn style="margin-right:20px;" :attrs="attrs">select file</uploader-btn>
          <uploader-btn :attrs="attrs" directory>select folder</uploader-btn>
        </uploader-drop>
        <uploader-list></uploader-list>
</uploader>

This component supports multi-file upload by default. The code above is taken from the official demo; the upload target is configured in uploadOptions1 (shown below). On uploader-btn, setting the directory property allows a whole folder to be selected for upload.

uploadOptions1:

uploadOptions1: {
        target: "//localhost:18080/api/upload/single", // upload endpoint
        testChunks: false, // whether to enable server-side chunk checking
        fileParameterName: "file", // default file parameter name
        headers: {},
        query() {},
        categoryMap: { // restrict the types allowed for upload
          image: ["gif", "jpg", "jpeg", "png", "bmp"]
        }
}

To make the backend API easier to write, we define a Chunk class that receives the parameters the component sends by default; these same parameters will come in handy later for chunked, resumable uploads.

Chunk class

@Data
public class Chunk implements Serializable {
    
    private static final long serialVersionUID = 7073871700302406420L;

    private Long id;
    /**
     * Number of the current chunk, starting from 1
     */
    private Integer chunkNumber;
    /**
     * Chunk size
     */
    private Long chunkSize;
    /**
     * Size of the current chunk
     */
    private Long currentChunkSize;
    /**
     * Total size
     */
    private Long totalSize;
    /**
     * File identifier
     */
    private String identifier;
    /**
     * File name
     */
    private String filename;
    /**
     * Relative Path
     */
    private String relativePath;
    /**
     * Total number of chunks
     */
    private Integer totalChunks;
    /**
     * File type
     */
    private String type;

    /**
     * The uploaded file
     */
    private MultipartFile file;
}

When writing the endpoint, we can use this class as the method parameter and receive what vue-simple-uploader sends directly. Note that the parameters are received via POST.

Interface method:

@PostMapping("single")
    public void singleUpload(Chunk chunk) {
                    // Get the incoming file
        MultipartFile file = chunk.getFile();
        // Get the filename
        String filename = chunk.getFilename();
        try {
            // Get the content of the file
            byte[] bytes = file.getBytes();
            // SINGLE_UPLOADER is a path constant I defined, which means that if there is no directory, create it.
            if (!Files.isWritable(Paths.get(SINGLE_FOLDER))) {
                Files.createDirectories(Paths.get(SINGLE_FOLDER));
            }
            // Get the path of the uploaded file
            Path path = Paths.get(SINGLE_FOLDER,filename);
            // Write bytes to the file
            Files.write(path, bytes);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
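For reference, SINGLE_FOLDER here, as well as CHUNK_FOLDER and the merge directory used later, are just path constants; their actual values are entirely up to you. A minimal sketch with hypothetical paths:

public class UploadConstants {
    // Hypothetical locations -- adjust to your own environment
    public static final String SINGLE_FOLDER = "/data/upload/single"; // single-file uploads
    public static final String CHUNK_FOLDER  = "/data/upload/chunk";  // temporary chunk storage
    public static final String MERGE_FOLDER  = "/data/upload/merge";  // merged files (referred to as mergeFolder in the merge code below)
}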

One thing to note: the Spring Boot backend will throw an error if the uploaded file is too large.

org.apache.tomcat.util.http.fileupload.FileUploadBase$FileSizeLimitExceededException: The field file exceeds its maximum permitted size of 1048576 bytes.

In that case you need to configure the servlet's maximum accepted file size in application.yml (the defaults are 1MB per file and 10MB per request):

spring:
  servlet:
    multipart:
      max-file-size: 10MB 
      max-request-size: 100MB

Next, start the project and pick some files to upload to see the effect. Convenient, isn't it? Still, the other components can do roughly the same thing; the real reason for choosing this one is its support for chunked, resumable uploads: if the network drops during an upload, reconnecting lets the transfer resume from the last breakpoint almost instantly. Let's see how resumable uploads work.

Chunked Resumable Upload

First, the general idea behind chunked resumable upload. The chunk size can be configured in the component; any file larger than that value is split into several chunks and uploaded piece by piece. As each chunk is uploaded, its chunkNumber is saved to a database (MySQL or Redis; here I choose Redis).

Each upload request carries an identifier parameter (I use the default value here; you could also regenerate it, for example from the file's MD5). The identifier serves as the Redis hash key, the hash field is "chunkNumberList", and the value is a Set made up of the chunkNumber of every chunk uploaded so far.

With testChunks set to true in the upload options, the component first sends a GET request to fetch the set of chunkNumbers already uploaded, then uses the checkChunkUploadedByResponse method to decide which chunks already exist and can be skipped, before sending POST requests to upload the remaining chunks.

Each time a chunk is uploaded, the service layer returns the current size of the set and compares it with totalChunks from the request. When the two are equal, a status code is returned that tells the frontend to send a merge request; the uploaded chunks are then merged into a single file, and the resumable upload of that file is complete.
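The backend code that follows also relies on a small redisDao helper (existsKey, hmGet, hmSet) that is not shown in this post. Below is a minimal sketch of what such a wrapper around Spring Data Redis's RedisTemplate might look like; the class name and method signatures are assumptions inferred from how it is called later:

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

// Minimal sketch of the redisDao helper assumed by the upload service
@Component
public class RedisDao {

    private final RedisTemplate<String, Object> redisTemplate;

    public RedisDao(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Does the given key exist?
    public boolean existsKey(String key) {
        return Boolean.TRUE.equals(redisTemplate.hasKey(key));
    }

    // Read one field of a hash
    public Object hmGet(String key, String field) {
        return redisTemplate.opsForHash().get(key, field);
    }

    // Write one field of a hash
    public void hmSet(String key, String field, Object value) {
        redisTemplate.opsForHash().put(key, field, value);
    }
}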


Here is the corresponding code.

Vue code:

<uploader
        :options="uploadOptions2"
        :autoStart="true"
        :files="files"
        @file-added="onFileAdded2"
        @file-success="onFileSuccess2"
        @file-progress="onFileProgress2"
        @file-error="onFileError2"
      >
        <uploader-unsupport></uploader-unsupport>
        <uploader-drop>
          <uploader-btn :attrs="attrs">chunked upload</uploader-btn>
        </uploader-drop>
        <uploader-list></uploader-list>
</uploader>

uploadOptions2, which verifies chunks that have already been uploaded:

uploadOptions2: {
        target: "//localhost:18080/api/upload/chunk",
        chunkSize: 1 * 1024 * 1024,
        testChunks: true,
        checkChunkUploadedByResponse: function(chunk, message) {
          let objMessage = JSON.parse(message);
          // Get the set of chunkNumbers already uploaded
          let chunkNumbers = objMessage.chunkNumbers;
          // If the current chunk is already in the set, it can be skipped
          return (chunkNumbers || []).indexOf(chunk.offset + 1) >= 0;
        },
        headers: {},
        query() {},
        categoryMap: {
          image: ["gif", "jpg", "jpeg", "png", "bmp"],
          zip: ["zip"],
          document: ["csv"]
        }
}

Handling a successful chunk upload: check the returned status to decide whether to trigger the merge.

onFileSuccess2(rootFile, file, response, chunk) {
      let res = JSON.parse(response);
      // The backend reported an error
      if (res.code == 1) {
        return;
      }
      // Need to merge
      if (res.code == 205) {
        // Send a merge request with identifier and filename; the parameter names must match the fields in the backend Chunk class, otherwise they will not be bound
        const formData = new FormData();
        formData.append("identifier", file.uniqueIdentifier);
        formData.append("filename", file.name);
        merge(formData).then(response => {});
      } 
    },

Checking which chunks already exist; note that this is the GET request!

                    @GetMapping("chunk")
    public Map<String, Object> checkChunks(Chunk chunk) {
        return uploadService.checkChunkExits(chunk);
    }

    @Override
    public Map<String, Object> checkChunkExits(Chunk chunk) {
        Map<String, Object> res = new HashMap<>();
        String identifier = chunk.getIdentifier();
        if (redisDao.existsKey(identifier)) {
            Set<Integer> chunkNumbers = (Set<Integer>) redisDao.hmGet(identifier, "chunkNumberList");
            res.put("chunkNumbers",chunkNumbers);
        }
        return res;
    }

Saving the chunk to disk and recording it in Redis; this is the POST request!

@PostMapping("chunk")    
                public Map<String, Object> saveChunk(Chunk chunk) {
        // The operation here is basically the same as the saving slip paragraph.~
        MultipartFile file = chunk.getFile();
        Integer chunkNumber = chunk.getChunkNumber();
        String identifier = chunk.getIdentifier();
        byte[] bytes;
        try {
            bytes = file.getBytes();
            // The difference here is that there is a save block where the file name is saved according to - chunkNumber.
            Path path = Paths.get(generatePath(CHUNK_FOLDER, chunk));
            Files.write(path, bytes);
        } catch (IOException e) {
            e.printStackTrace();
        }
                    // What is done here is to save to redis and return the size of the collection
        Integer chunks = uploadService.saveChunk(chunkNumber, identifier);
        Map<String, Object> result = new HashMap<>();
        // If the size of the set is equal to that of total Chunks, it is determined that the block has been uploaded and merge operation is performed.
        if (chunks.equals(chunk.getTotalChunks())) {
            Result. put ("message", "successful upload! "";
            result.put("code", 205);
        }
        return result;
    }


    /**
     * Generate the file path for a chunk
     */
    private static String generatePath(String uploadFolder, Chunk chunk) {
        StringBuilder sb = new StringBuilder();
        // Build the upload path
        sb.append(uploadFolder).append(File.separator).append(chunk.getIdentifier());
        // If the uploadFolder/identifier directory does not exist, create it
        if (!Files.isWritable(Paths.get(sb.toString()))) {
            try {
                Files.createDirectories(Paths.get(sb.toString()));
            } catch (IOException e) {
                log.error(e.getMessage(), e);
            }
        }
        // Return the chunk file path: the filename, a "-" separator, then the chunkNumber,
        // so the chunks can be sorted by number when merging
        return sb.append(File.separator)
                .append(chunk.getFilename())
                .append("-")
                .append(chunk.getChunkNumber()).toString();
    }

    /**
     * Save the chunk information to Redis
     */
    public Integer saveChunk(Integer chunkNumber, String identifier) {
        // Get the current chunkNumber list
        Set<Integer> oldChunkNumber = (Set<Integer>) redisDao.hmGet(identifier, "chunkNumberList");
        // If nothing is stored yet, create a new Set, add the current chunkNumber, and save it to Redis
        if (Objects.isNull(oldChunkNumber)) {
            Set<Integer> newChunkNumber = new HashSet<>();
            newChunkNumber.add(chunkNumber);
            redisDao.hmSet(identifier, "chunkNumberList", newChunkNumber);
            // Return the size of the set
            return newChunkNumber.size();
        } else {
            // Otherwise add the current chunkNumber to the existing set and store it back in Redis
            oldChunkNumber.add(chunkNumber);
            redisDao.hmSet(identifier, "chunkNumberList", oldChunkNumber);
            // Return the size of the set
            return oldChunkNumber.size();
        }
    }

Backend merge code:

@PostMapping("merge")
    public void mergeChunks(Chunk chunk) {
        String fileName = chunk.getFilename();
        uploadService.mergeFile(fileName,CHUNK_FOLDER + File.separator + chunk.getIdentifier());
    }

        @Override
    public void mergeFile(String fileName, String chunkFolder) {
        try {
            // If the merged path does not exist, a new one is created
            if (!Files.isWritable(Paths.get(mergeFolder))) {
                Files.createDirectories(Paths.get(mergeFolder));
            }
            // Merged filenames
            String target = mergeFolder + File.separator + fileName;
            // Create files
            Files.createFile(Paths.get(target));
            // Traverse the partitioned folders, filter and sort them, and write them to the merged files in an additional way.
            Files.list(Paths.get(chunkFolder))
                     // Filter files with "-"
                    .filter(path -> path.getFileName().toString().contains("-"))
                     // Sort from small to large
                    .sorted((o1, o2) -> {
                        String p1 = o1.getFileName().toString();
                        String p2 = o2.getFileName().toString();
                        int i1 = p1.lastIndexOf("-");
                        int i2 = p2.lastIndexOf("-");
                        return Integer.valueOf(p2.substring(i2)).compareTo(Integer.valueOf(p1.substring(i1)));
                    })
                    .forEach(path -> {
                        try {
                            // Write files as additions
                            Files.write(Paths.get(target), Files.readAllBytes(path), StandardOpenOption.APPEND);
                            // Delete the block after merging
                            Files.delete(path);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    });
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

That wraps up our resumable upload. I have pushed the complete code to GitHub; stars, forks, and PRs are welcome. (I will also upload my blog posts to GitHub later.)

Front end: https://github.com/viyog/vibo…

Background: https://github.com/viyog/viboot
