Working with large data objects (LOBs) in PDO


Generally, we only store int and varchar data in the database. First, modern relational databases are heavily optimized for these types; second, most indexes cannot be applied to fields with very large contents — a TEXT column, for example, is not well suited to indexing. So in practice we rarely store large content fields in the database. MySQL does, however, provide storage types for exactly this purpose; we just don't use them much. Today, let's learn how to operate on large data objects in MySQL using PDO.

What are large data objects?

"Large" usually means "about 4 KB or more," although some databases can routinely handle up to 32 KB before data becomes "large." A large object may be textual or binary in nature. By using the PDO::PARAM_LOB type code in a PDOStatement::bindParam() or PDOStatement::bindColumn() call, we can instruct PDO to use a large-data type. PDO::PARAM_LOB tells PDO to map the data as a stream, so that it can be manipulated with the PHP Streams API.

In MySQL, a column declared with one of the BLOB types is a large-object field. When calling bindParam() or bindColumn(), specify the parameter type as PDO::PARAM_LOB and you can obtain the contents of the object directly as a stream handle and operate on it just like a handle returned by fopen().

CREATE TABLE `zy_blob` (
  `id` int NOT NULL AUTO_INCREMENT,
  `attach` longblob,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;

This is the table we will use for testing. The attach field is declared as LONGBLOB, the largest BLOB type, so that we can store more data; after all, today's images and files easily run to several MB or tens of MB, so we simply use the largest type for this test. For reference: TINYBLOB holds up to 255 bytes, BLOB up to 64 KB, MEDIUMBLOB up to 16 MB, and LONGBLOB up to 4 GB.
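The four BLOB sizes above can be seen side by side in a single declaration. This demo table is my own illustration (it is not used elsewhere in this article):

```sql
CREATE TABLE `blob_size_demo` (
  `t` tinyblob,   -- up to 255 bytes
  `b` blob,       -- up to 64 KB
  `m` mediumblob, -- up to 16 MB
  `l` longblob    -- up to 4 GB
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```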

What happens if you manipulate a large object directly?

Let's first operate on a large object directly and see what happens.

$stmt = $pdo->prepare("insert into zy_blob (attach) values (?)");
$fp = fopen('4960364865db53dcb33bcf.rar', 'rb');
$stmt->execute([$fp]); // pass the handle directly, without binding it as a LOB

$stmt = $pdo->query("select attach from zy_blob where id=1");
$file = $stmt->fetch(PDO::FETCH_ASSOC);
print_r($file);
// Array
// (
//     [attach] => Resource id #6
// )

In this code we do not bind the parameter as a LOB; we simply pass the handle returned by fopen() straight into the BLOB field. As you can see, the database ends up storing nothing but the string "Resource id #6". In other words, without any special handling the $fp handle is coerced to a string, and casting a resource to a string yields only its resource ID — which the BLOB column then stores as if it were ordinary character data.

The correct approach

Next, let's look at the correct approach: inserting data with bindParam() and reading it back with bindColumn().

$stmt = $pdo->prepare("insert into zy_blob (attach) values (?)");

$fp = fopen('4960364865db53dcb33bcf.rar', 'rb');

$stmt->bindParam(1, $fp, PDO::PARAM_LOB); // bind the parameter as PDO::PARAM_LOB
$stmt->execute();

$stmt = $pdo->prepare("select attach from zy_blob where id=2");
$stmt->execute();
// $file = $stmt->fetch(PDO::FETCH_ASSOC);
// print_r($file); // empty
$stmt->bindColumn(1, $file, PDO::PARAM_LOB); // bind the column to a PHP variable
$stmt->fetch(PDO::FETCH_BOUND); // FETCH_BOUND returns true and assigns the values of the
                                // result-set columns to the PHP variables bound through
                                // PDOStatement::bindParam() or PDOStatement::bindColumn()
print_r($file); // raw binary content
$fp = fopen('a.rar', 'wb');
fwrite($fp, $file);
fclose($fp);

First, we bind the data with bindParam() and specify the PDO::PARAM_LOB type; the binary contents of the file handle are then inserted into the database correctly. Next, we use bindColumn(), again specifying PDO::PARAM_LOB, to retrieve the queried data. Printing the fetched field shows raw binary content. Finally, we save that binary content to a file under a new name.

You can swap in a file of your own, run the code, and check whether the generated file matches the original. I used a compressed archive here; the resulting a.rar is identical to the original in both size and extracted contents.
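To check the round trip programmatically rather than by eye, here is a minimal sketch. The helper name filesAreIdentical is my own, and the file names in the usage comment are the placeholders from the example above:

```php
<?php
// Compare two files by size first (cheap), then by content hash.
function filesAreIdentical(string $a, string $b): bool
{
    return filesize($a) === filesize($b)
        && md5_file($a) === md5_file($b);
}

// Usage: compare the original archive with the copy rebuilt from
// the database (file names as used in the example above):
// var_dump(filesAreIdentical('4960364865db53dcb33bcf.rar', 'a.rar'));
```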


So what is operating on large data objects really for? It is simply a way to keep large files: we read them into the program as a binary stream and save them in a database field. Think of image storage, the most common case in everyday development — you could implement it this way. But here is the key point: we usually prefer to save files directly in a directory on disk and store only their paths in the database. Database resources are precious, larger tables are harder to optimize, and the database has its own caching mechanisms, so spending those resources on big files is rarely worth it. Of course, for special needs — say, private files you don't want sitting in a plain directory on the hard disk, or a temporary cross-server storage scheme — it is a perfectly workable option.
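As a contrast, here is a minimal sketch of the path-storage approach just described. The table name, column names, and the stored path are all made up for illustration; an in-memory SQLite database keeps the sketch self-contained, but the PDO calls would be the same against MySQL:

```php
<?php
// Store only the file's path in the database; the file itself
// stays on disk (or in cloud storage).
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE attachments (id INTEGER PRIMARY KEY, path TEXT)');

// Hypothetical path where the upload was saved on disk.
$storedPath = '/data/uploads/4960364865db53dcb33bcf.rar';

$stmt = $pdo->prepare('INSERT INTO attachments (path) VALUES (?)');
$stmt->execute([$storedPath]);
$id = $pdo->lastInsertId();

// Later: look the path up and read/serve the file from disk.
$stmt = $pdo->prepare('SELECT path FROM attachments WHERE id = ?');
$stmt->execute([$id]);
echo $stmt->fetchColumn(), PHP_EOL;
```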

In modern development, I doubt your company will be too stingy to buy cloud storage (Qiniu, Upyun, Aliyun OSS). These services are not just remote disks or network drives; they come with extra capabilities such as image cropping, watermarking, a bundled CDN, and bandwidth and traffic allowances. In short, modern storage should go to the cloud wherever possible. Even for individual developers, many vendors offer a free tier for low traffic and small data volumes, which is far more convenient than rolling our own.

Test code:


Reference documents:


Official account: hard core project manager

Add me on WeChat/QQ: [xiaoyuezigonggong / 149844827] to get free PHP and project-management learning materials

Search for "hard core project manager" on Douyin (TikTok), WeChat official accounts, video accounts, and Toutiao.

Station B ID: 482780532
