002.00 Image Processing with PIL.Image

Posted: 2021-12-05

Originally filed: 2019-08-10

Updated: 2021-03-05 (added im.n_frames)

Environment: Windows 10, Python 3.7.4, Pillow (PIL) 6.1.0

Subject: 002.00 Image Processing

Coordinate system

  1. The origin (0, 0) is the upper-left corner (strictly speaking, the center of the first pixel is at (0.5, 0.5)); x increases to the right and y increases downward
  2. Most parameters are given as tuples, and the unit is the pixel
  3. Point: (x, y); the default is usually (0, 0)
  4. Size: (width, height)
  5. Region (box): (x1, y1, x2, y2), where (x1, y1) is the upper-left corner and (x2, y2) the lower-right corner. If only (x1, y1) is given, the lower-right corner of the image is assumed; the default is the whole image (see the sketch after this list)
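
A minimal sketch of the coordinate and box conventions, assuming a hypothetical file named pic1.jpg exists in the working directory:

    # Origin (0, 0) is the upper-left corner; boxes are (x1, y1, x2, y2).
    from PIL import Image

    im = Image.open("pic1.jpg")
    w, h = im.size                             # size is (width, height)
    quarter = im.crop((0, 0, w // 2, h // 2))  # upper-left quarter of the image
    print(quarter.size)                        # -> (w // 2, h // 2)
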

Relevant notes

The notes below cover most of what you need to know about the Image module in PIL (the Python Imaging Library, maintained today as Pillow). This background matters: only after you are familiar with it can you make good use of the function and method reference that follows

  1. The fp (file) argument can be given in three ways (see the sketch after this item)

    • A file name string with an absolute or relative path, such as 'pic1.jpg', '.\pic1.jpg', 'd:\working.jpg'
    • A file object opened in binary mode that provides the read() / seek() / tell() methods (omitted)
    • A pathlib.Path object (omitted)
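
A minimal sketch of the three accepted fp forms, assuming the hypothetical file pic1.jpg exists:

    from pathlib import Path
    from PIL import Image

    im1 = Image.open("pic1.jpg")              # 1) file name string (relative path)

    with open("pic1.jpg", "rb") as f:         # 2) binary file object with read()/seek()/tell()
        im2 = Image.open(f)
        im2.load()                            # read the pixel data before the file closes

    im3 = Image.open(Path("pic1.jpg"))        # 3) pathlib.Path object
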
  2. Format (file type): image files come in many formats because they store or present the data differently; BMP, DIB, GIF, JPEG, PNG, PPM, TIFF and others are currently supported. Different file types and different kinds of image data need different handling, so PIL cannot treat them all uniformly. As a result there are many special cases in practice, and it is easy to make mistakes if you are not careful

  3. Extensions: apng, bmp, dib, gif, jfif, jpe, jpeg, jpg, pbm, pgm, png, pnm, ppm, tif and tiff are currently recognized

  4. An image can basically be broken down into the following structures:

    • Frame: a multi-frame image contains several frames
    • Channel (band): the values of one color component across all points form a band (1 to 4 per image)
    • Alpha channel (mask): the transparency of each point forms a mask
    • Palette: the color lookup table used by palette-based images
  5. The built-in processing covers the vast majority of everyday needs; special requirements can also be handled with code you design yourself

  6. f_mode: the file open mode, such as 'r', 'w', 'b', 't', ...; images are usually read with 'rb' and written with 'wb'

  7. ImageObj: the objects, attributes, functions and methods defined by the Image class cover almost everything PIL offers

  8. A function or method may or may not modify the object it is called on, and may or may not return a value, so pay attention to how each one is used. If the called object will be changed and the original must be kept, copy it first and call the method on the copy. Also, assigning the result of a method that returns nothing to a new variable yields None, which is a frequent source of errors

  9. Common parameters of functions and methods

    • size: 2-tuple, (width, height)

    • mode: string; the image mode describes how the pixel data is stored. Images in different modes support different functions, methods and processing, and they are not interchangeable. Before working on an image, confirm its file type or mode, how many bands it has, whether it has an alpha (mask) band or a palette, and how many frames it contains. The common modes are listed below (see the sketch after the mode list):

      • "1" – bilevel, 1 band (L), 1 byte/pixel, black and white only, values 0 / 1
      • "L" – grayscale, 1 band (L), 1 byte/pixel, brightness only, values 0 ~ 255
      • "P" – palette image, 1 band (L), 1 byte/pixel, indexed into a 256-color palette
      • "RGB" – color image, 3 bands (L), 3 bytes/pixel, true color
      • "RGBA" – color image, 4 bands (L), 4 bytes/pixel, true color with an alpha mask
      • "CMYK" – color image, 4 bands (L), 4 bytes/pixel, color separation
      • "YCbCr" – color image, 3 bands (L), 3 bytes/pixel, 3x8-bit, color video format
      • "I" – 1 band (I), 32-bit signed integer pixels
      • "F" – 1 band (F), 32-bit floating-point pixels
      • "LA" – 2 bands (L), grayscale with alpha (omitted)
      • "RGBX" – 4 bands (L), true color with padding
      • "RGBa" – 4 bands (L), true color with premultiplied alpha
      • "LAB" – 3 bands (L), (omitted)
      • "HSV" – 3 bands (L), hue / saturation / value (brightness) (omitted)
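
A minimal sketch of inspecting and converting modes, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    im = Image.open("pic1.jpg")
    print(im.mode, im.getbands())    # e.g. "RGB", ("R", "G", "B")

    gray = im.convert("L")           # true color -> 8-bit grayscale
    bw   = gray.convert("1")         # grayscale -> bilevel (dithered)
    pal  = im.convert("P", palette=Image.ADAPTIVE, colors=64)  # adaptive 64-color palette
    rgba = im.convert("RGBA")        # add an alpha band (fully opaque by default)
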
    • alpha: integer or float; the transparency of an image, which controls how it blends with another image (or the background) when they overlap. 255 is fully opaque and 0 fully transparent, with other values interpolated; it can also be expressed as a value from 0 to 1

    • mask: a single-band ImageObj that plays the same role as alpha, but since it has the same size as the image it can control the transparency of every pixel individually. Its mode can only be "1", "L" or "RGBA"

    • ImageObj_1, ImageObj_2: when two images are combined point by point, their size and mode must be the same

    • function: when a function is passed as a parameter, its results are precomputed, so functions with variable output, such as random number generators, cannot be used. (The system evaluates the function in advance to build a 256- or 65536-entry lookup table and then maps each pixel value through that table; see the sketch below.)
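
A minimal sketch of how a function parameter is turned into a lookup table by im.point(), assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    im = Image.open("pic1.jpg").convert("L")

    # Each of the 256 gray levels is passed through the lambda exactly once,
    # so the mapping must be deterministic (no random.random() here).
    brighter  = im.point(lambda v: min(255, int(v * 1.3)))
    threshold = im.point(lambda v: 255 if v > 128 else 0)
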

    • box: 2-tuple or 4-tuple; a rectangular region

    • xy: 2-tuple; the coordinates of a point

    • array: an object produced by np.asarray(im) (omitted)

    • data: an object holding a raw byte string or buffer

    • band: band index (0 ~ n) or band name such as "R", "G", "B"; the default, None, means all bands

    • bands: a tuple of single-band images

    • filter: the ImageFilter module provides linear, nonlinear and other filters computed in different ways. The ten built-in filters below are currently provided; to use them, first run from PIL import ImageFilter. The numbers after each name are the kernel size, the kernel weights, the scale and the offset used in the filter computation

      • ImageFilter.BLUR: blur, (5, 5), (1,1,1,1,1, 1,0,0,0,1, 1,0,0,0,1, 1,0,0,0,1, 1,1,1,1,1), 16, 0
      • ImageFilter.CONTOUR: contour, (3, 3), (-1,-1,-1, -1,8,-1, -1,-1,-1), 1, 255
      • ImageFilter.DETAIL: detail, (3, 3), (0,-1,0, -1,10,-1, 0,-1,0), 6, 0
      • ImageFilter.EDGE_ENHANCE: edge enhance, (3, 3), (-1,-1,-1, -1,10,-1, -1,-1,-1), 2, 0
      • ImageFilter.EDGE_ENHANCE_MORE: stronger edge enhance, (3, 3), (-1,-1,-1, -1,9,-1, -1,-1,-1), 1, 0
      • ImageFilter.EMBOSS: emboss, (3, 3), (-1,0,0, 0,1,0, 0,0,0), 1, 128
      • ImageFilter.FIND_EDGES: find edges, (3, 3), (-1,-1,-1, -1,8,-1, -1,-1,-1), 1, 0
      • ImageFilter.SMOOTH: smooth, (3, 3), (1,1,1, 1,5,1, 1,1,1), 13, 0
      • ImageFilter.SMOOTH_MORE: stronger smooth, (5, 5), (1,1,1,1,1, 1,5,5,5,1, 1,5,44,5,1, 1,5,5,5,1, 1,1,1,1,1), 100, 0
      • ImageFilter.SHARPEN: sharpen, (3, 3), (-2,-2,-2, -2,32,-2, -2,-2,-2), 16, 0
      • Filters of this kind can also be customized:
        • ImageFilter.Kernel((n, n), kernel_weights, scale, offset), where kernel_weights is a sequence of n x n weights
        • n can only be 3 or 5; scale is the divisor applied to the weighted sum, and offset is added afterwards
        • scale defaults to the sum of the weights; if that sum is 0, 1 is used instead
        • For example, EMBOSS is equivalent to Kernel((3, 3), (-1, 0, 0, 0, 1, 0, 0, 0, 0), 1, 128), as in the sketch below
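
A minimal sketch of a custom convolution filter built with ImageFilter.Kernel, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image, ImageFilter

    im = Image.open("pic1.jpg")

    # Rebuild EMBOSS by hand: 3x3 kernel, scale 1 (the weights sum to 0), offset 128.
    emboss = ImageFilter.Kernel((3, 3), (-1, 0, 0, 0, 1, 0, 0, 0, 0), scale=1, offset=128)
    im_emboss = im.filter(emboss)

    # A plain 3x3 box blur: scale defaults to the sum of the weights (9).
    box_blur = ImageFilter.Kernel((3, 3), (1,) * 9)
    im_blur = im.filter(box_blur)
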
    • palette: the color table; in mode "P" the real color of each point is looked up in a palette

      • Image.WEB: the web palette uses only six values per RGB channel (0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF), giving 6 ** 3 = 216 colors in total (built in)
      • Image.ADAPTIVE: a palette derived from the colors actually present in the image; it is the most commonly used option and gives good results with the fewest colors
    • rawmode: the raw mode of palette data, one of "L", "LA", "RGB" or "RGBA"

    • color: a single-band image uses an int or float for a color, while a multi-band image uses a tuple with one value per band; a color can also be written in the following ways (see the sketch after this color list):

      • RGB: "#rrggbb", "rgb(R, G, B)", "rgb(R%, G%, B%)"
      • HSL: "hsl(H, S%, L%)", with hue 0 ~ 360 (0 red, 120 green, 240 blue), saturation 0% ~ 100% (0% gray, 100% full color) and lightness 0% ~ 100% (0% black, 50% normal, 100% white)
      • HTML: about 140 case-insensitive color names such as red, white, cyan, silver, blue, gray, grey, darkblue, black, lightblue, orange, purple, brown, yellow, maroon, lime, green, magenta, ...
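
A minimal sketch of the accepted color notations, using only in-memory images:

    from PIL import Image, ImageColor

    im1 = Image.new("RGB", (64, 64), "red")                 # HTML color name
    im2 = Image.new("RGB", (64, 64), "#ff8800")             # "#rrggbb"
    im3 = Image.new("RGB", (64, 64), "rgb(255, 136, 0)")    # "rgb(R, G, B)"
    im4 = Image.new("RGB", (64, 64), "hsl(32, 100%, 50%)")  # "hsl(H, S%, L%)"
    im5 = Image.new("RGB", (64, 64), (255, 136, 0))         # plain per-band tuple

    print(ImageColor.getrgb("hsl(32, 100%, 50%)"))          # -> (255, 136, 0)
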
    • resample: the resampling filter. When the geometry of an image changes there is no pixel value at the new positions, so some method must be used to compute the new values (see the sketch after this list)

      • 0 – Image.NEAREST: use the nearest source pixel; fastest, but worst quality for both enlarging and shrinking
      • 1 – Image.LANCZOS (Image.ANTIALIAS): use a truncated sinc function over all source pixels that can contribute to the new value; slowest, best quality for both enlarging and shrinking (usable only in resize() and thumbnail())
      • 2 – Image.BILINEAR (linear): linear interpolation over the surrounding 2x2 pixels; fast, mediocre for enlarging and shrinking
      • 3 – Image.BICUBIC (cubic): cubic interpolation over the surrounding 4x4 (sixteen) pixels; slower, good for enlarging, acceptable for shrinking
      • 4 – Image.BOX: fast, not great for enlarging, poor for shrinking
      • 5 – Image.HAMMING: fast, not great for enlarging, acceptable for shrinking
      • In short, ANTIALIAS / LANCZOS suits shrinking a large image, while BILINEAR / BICUBIC suit keeping the size or enlarging a small image
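
A minimal sketch comparing resampling filters in resize(), assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    im = Image.open("pic1.jpg")
    small = (im.width // 4, im.height // 4)

    blocky = im.resize(small, resample=Image.NEAREST)   # fastest, lowest quality
    smooth = im.resize(small, resample=Image.LANCZOS)   # best choice for shrinking
    larger = im.resize((im.width * 2, im.height * 2), resample=Image.BICUBIC)  # good for enlarging
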
    • transpose methods: mirror or rotate the whole image. These constants are not angle values and cannot be passed to rotate()

      • 0 – Image.FLIP_LEFT_RIGHT: mirror left to right
      • 1 – Image.FLIP_TOP_BOTTOM: mirror top to bottom
      • 2 – Image.ROTATE_90: rotate 90 degrees counterclockwise
      • 3 – Image.ROTATE_180: rotate 180 degrees counterclockwise
      • 4 – Image.ROTATE_270: rotate 270 degrees counterclockwise
      • 5 – Image.TRANSPOSE: ROTATE_90 followed by FLIP_TOP_BOTTOM
      • 6 – Image.TRANSVERSE: FLIP_TOP_BOTTOM followed by ROTATE_90
    • transform methods (see the sketch after this list):

      • 0 – Image.AFFINE: affine transform; data is (a, b, c, d, e, f), and each output pixel (x, y) is taken from source position (a*x + b*y + c, d*x + e*y + f)
      • 1 – Image.EXTENT: cut out a region; data is (x0, y0, x1, y1), which is mapped onto (0, 0, size_x, size_y) of the output
      • 2 – Image.PERSPECTIVE: perspective transform; data is (a, b, c, d, e, f, g, h), and each output pixel comes from ((a*x + b*y + c) / (g*x + h*y + 1), (d*x + e*y + f) / (g*x + h*y + 1))
      • 3 – Image.QUAD: map a quadrilateral onto a rectangle; data is (x0, y0, x1, y1, x2, y2, x3, y3), giving the upper-left, lower-left, lower-right and upper-right corners
      • 4 – Image.MESH: map several source quadrilaterals in one operation
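
A minimal sketch of im.transform() with the EXTENT and AFFINE methods, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    im = Image.open("pic1.jpg")

    # EXTENT: map the source region (0, 0, 200, 150) onto a 400x300 output,
    # i.e. crop and enlarge in one step.
    zoomed = im.transform((400, 300), Image.EXTENT, data=(0, 0, 200, 150))

    # AFFINE: with data (a, b, c, d, e, f), output pixel (x, y) is read from
    # source position (a*x + b*y + c, d*x + e*y + f); here a translation by (50, 30).
    shifted = im.transform(im.size, Image.AFFINE, data=(1, 0, 50, 0, 1, 30))
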
    • quantize method: when the color values must be grouped to reduce the number of colors, this controls how the groups are chosen so that the result stays close to the original

      • 0 – Image.MEDIANCUT: median cut; split the colors into two groups at the midpoint of the widest color range and repeat until the required number of colors (groups) is reached
      • 1 – Image.MAXCOVERAGE: maximum coverage (omitted)
      • 2 – Image.FASTOCTREE: fast octree color quantization (omitted)
      • 3 – Image.LIBIMAGEQUANT: libimagequant (omitted)
    • dither: the dithering method used when colors must be approximated

      • 0 – Image.NEAREST or None
      • 1 – Image.ORDERED = 1 (not implemented yet)
      • 2 – Image.RASTERIZE = 2 (not implemented yet)
      • 3 – Image.FLOYDSTEINBERG (default) (omitted)
    • categories: (omitted)

    • decoder: decoders apply only to the raw pixel data. If a complete image file sits in a byte string, wrap it in BytesIO and open it with Image.open() instead. The most common decoder is 'raw', which handles uncompressed formats; a new format may need its own decoder

    • im: the ImageObj that has been opened or created

    • alpha transparency is basically computed in one of the following three ways (see the sketch after this list)

      • Method 1 (alpha-1) is used when compositing two images that both have alpha masks

        • Suppose point 1 is (r1, g1, b1, a1), point 2 is (r2, g2, b2, a2), and the new point is (r3, g3, b3, a3)
        • factor_pixel = a2 / (a2 + a1 * (1.0 - a2 / 255))
        • r3 = r2 * factor_pixel + r1 * (1 - factor_pixel)
        • factor_alpha = 1.0 - a2 / 255
        • a3 = a2 + a1 * factor_alpha
        • g3 and b3 are computed the same way as r3
      • Method 2 (alpha-2) uses a single alpha value, whether or not image 1 and image 2 have masks

        • Suppose r1 is a point in image 1, r2 the corresponding point in image 2, r3 the new point, and alpha = 0 ~ 1
        • r3 = r1 * (1.0 - alpha) + r2 * alpha
        • g3 and b3 are computed the same way as r3
      • Method 3 (alpha-3) uses a mask image

        • Suppose r1 is a point in image 1, r2 the corresponding point in image 2, alpha the mask value (0 ~ 255) and r3 the new point
        • r3 = (r1 * (255 - alpha) + r2 * alpha) / 255
        • g3 and b3 are computed the same way as r3
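
A minimal sketch of the alpha-2 formula, which is what Image.blend() computes per pixel, using only in-memory images:

    from PIL import Image

    im1 = Image.new("RGB", (64, 64), "red")     # r1 = 255, g1 = 0, b1 = 0
    im2 = Image.new("RGB", (64, 64), "blue")    # r2 = 0,   g2 = 0, b2 = 255

    alpha = 0.25                                # 0 -> im1, 1 -> im2
    out = Image.blend(im1, im2, alpha)

    # new = old1 * (1 - alpha) + old2 * alpha
    print(out.getpixel((0, 0)))                 # roughly (191, 0, 64)
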
    • ImageFilter filter classes (see the sketch after this list)

      • Kernel(size, kernel, scale=None, offset=0): size is (3, 3) or (5, 5); kernel is a sequence of 9 or 25 weights; scale is the divisor (internally defaulting to the sum of the kernel weights); offset is added afterwards
      • RankFilter(size, rank): sort the pixels in the surrounding (size, size) window and take the value at position rank as the new value
      • MinFilter(size=3): take the smallest value in the surrounding (size, size) window as the new value
      • MedianFilter(size=3): take the median value in the surrounding (size, size) window as the new value
      • MaxFilter(size=3): take the largest value in the surrounding (size, size) window as the new value
      • ModeFilter(size=3): take the most frequent value in the surrounding (size, size) window as the new value; if no value dominates, keep the original value
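
A minimal sketch of the built-in and rank-based filters, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image, ImageFilter

    im = Image.open("pic1.jpg")

    blurred = im.filter(ImageFilter.BLUR)             # built-in 5x5 blur
    sharper = im.filter(ImageFilter.SHARPEN)          # built-in 3x3 sharpen
    despeck = im.filter(ImageFilter.MedianFilter(3))  # median of each 3x3 window
    dilated = im.filter(ImageFilter.MaxFilter(3))     # maximum of each 3x3 window
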

Common attributes (see the sketch after this list)

  • Image.info: dictionary; miscellaneous data attached to the image; it is not standardized, so it is of limited use
  • Image.format: string; the source file format name, such as "BMP"; None if the image was not read from a file
  • Image.mode: string; the image mode name, such as "L" (grayscale), "RGB" (true color) or "CMYK" (pre-press)
  • Image.palette: ImagePalette instance; the palette table of a "P" mode image; None if the image is not in "P" mode
  • Image.size: 2-tuple; the image (width, height)
  • Image.width: int; the image width
  • Image.height: int; the image height
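
A minimal sketch of reading the common attributes, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    im = Image.open("pic1.jpg")
    print(im.format)            # e.g. "JPEG"; None for images created in memory
    print(im.mode)              # e.g. "RGB"
    print(im.size)              # (width, height)
    print(im.width, im.height)
    print(im.info)              # loosely standardized metadata dictionary
    print(im.palette)           # None unless the mode is "P"
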

Image processing

  • File related

    • Open a file: Image.open(fp, mode='r')
      • Return: ImageObj
      • mode: "r" only (the file itself is actually opened in binary read-only mode, "rb")
      • See the sketch after this group
    • Close a file: im.close()
      • Return: None, or raises ValueError; the called object is changed and can no longer be used
    • Save a file: im.save(fp, format=None, **params)
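
A minimal sketch of the open / save / close cycle, assuming the hypothetical file pic1.jpg exists (the output names are made up):

    from PIL import Image

    im = Image.open("pic1.jpg")                       # format inferred from the file contents
    im.save("copy1.png")                              # format inferred from the extension
    im.save("copy2.jpg", format="JPEG", quality=85)   # explicit format plus an encoder option
    im.close()                                        # im can no longer be used afterwards
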
  • Generating new images directly

    • Create a new image: Image.new(mode, size, color=0)
      • Return: ImageObj, or raises IOError
      • color: the default is 0 (black); with None the pixels are not initialized
    • Copy an image: im.copy()
      • Return: ImageObj
    • Mandelbrot image: Image.effect_mandelbrot(size, extent, quality)
      • Return: ImageObj; an image of the Mandelbrot set
      • extent: 4-tuple, the range of the constant c (to be confirmed); a reference value is (-2, -2, 1, 1)
      • quality: the larger the value, the finer the result; it also depends on the image size
      • The Mandelbrot set contains the complex values c for which the sequence [z, f(z), f(f(z)), ...] with f(z) = z ** 2 + c stays within a disk of finite radius instead of escaping to infinity
    • Gaussian noise: Image.effect_noise(size, sigma)
      • Return: ImageObj
      • sigma: integer or float, the standard deviation of the Gaussian noise; the larger the value, the noisier the image
    • Linear black-to-white gradient: Image.linear_gradient(mode)
      • Return: ImageObj
      • A 256x256 image fading from black at the top to white at the bottom
      • mode can only be "L" or "P"
    • Radial black-to-white gradient: Image.radial_gradient(mode)
      • Return: ImageObj
      • A 256x256 image, black in the center and white at the edge
      • mode can only be "L" or "P"
    • Array to image: Image.fromarray(array, mode=None)
      • Return: ImageObj
      • If mode is None, it is determined from the array type
      • Implemented on top of frombuffer()
    • Buffer to image: Image.frombuffer(mode, size, data, decoder_name='raw', *args)
      • Return: ImageObj
      • For the modes "L", "RGBX", "RGBA" and "CMYK" the image shares memory with the buffer; other modes do not
      • The extra arguments are passed to the decoder. With the "raw" decoder a complete argument set must be supplied, otherwise an error may occur; the safest call is frombuffer(mode, size, data, "raw", mode, 0, 1)
      • Implemented on top of Image.frombytes()
    • Bytes to image: Image.frombytes(mode, size, data, decoder_name='raw', *args)
      • Return: ImageObj
      • It is best to use only the first three parameters (mode, size, data) and omit the rest
    • Extract a single band: im.getchannel(channel)
      • Return: ImageObj in mode "L"
      • channel: integer index or band name; for RGBA the R band is 0 and the alpha band is "A"
    • Split into all bands: im.split()
      • Return: tuple of single-band images
    • Merge bands into one image: Image.merge(mode, bands)
      • Return: ImageObj
      • The number of bands in the given mode must equal the number of images in bands
    • Crop a region: im.crop(box=None)
      • Return: ImageObj
      • box: if omitted, the result is the same as copy() (see the sketch after this group)
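
A minimal sketch of creating, splitting, merging and cropping images, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    canvas = Image.new("RGB", (200, 100), "white")   # new blank image
    im = Image.open("pic1.jpg")

    r, g, b = im.split()                             # three single-band "L" images
    swapped = Image.merge("RGB", (b, g, r))          # recombine with the bands reordered

    red_only = im.getchannel("R")                    # same as im.split()[0]
    piece = im.crop((10, 10, 110, 60))               # 100x50 region
    backup = im.copy()                               # independent copy
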
  • Image compositing (see the sketch after this group)

    • Image.alpha_composite(ImageObj_1, ImageObj_2)
      • Return: ImageObj; ImageObj_2 is composited over ImageObj_1
      • ImageObj_1, ImageObj_2: both must be RGBA. If an image is in another mode, putalpha() can be used to set an alpha value, which also converts it to RGBA, before calling alpha_composite()
      • Uses the mask / alpha (alpha-1) computation
    • Blend with a single alpha value: Image.blend(ImageObj_1, ImageObj_2, alpha)
      • Return: ImageObj; uses the fixed-value alpha-2 computation
      • alpha: a number from 0 to 1; 0 gives ImageObj_1, 1 gives ImageObj_2
    • Image.composite(ImageObj_1, ImageObj_2, mask)
      • Return: ImageObj; ImageObj_1 is composited over ImageObj_2 according to the mask
      • Implemented with paste(image1, None, mask), i.e. the alpha-3 computation
      • Its algorithm and argument order differ from alpha_composite()
    • In-place alpha compositing of a region: im.alpha_composite(ImageObj, dest=(0, 0), source=(0, 0))
      • Return: None; the called object is modified
      • dest: 2-tuple, the position in the called object at which the source content is composited (coordinates must be non-negative)
      • source: 2-tuple, the starting position within ImageObj, or a 4-tuple region of it
      • Compositing method: the alpha-1 computation
    • Paste: im.paste(ImageObj | color, box=None, mask=None)
      • Return: None; the called object is changed, i.e. the paste happens on the original image
      • The pasted source can also be a single color given as an integer or tuple; if the pasted image's mode differs from the original, it is converted automatically
      • Besides the pixel data, the alpha band of the pasted image is also merged into the original
      • Compositing method: the alpha-3 computation
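
A minimal sketch of the compositing functions, using only in-memory RGBA images:

    from PIL import Image

    base = Image.new("RGBA", (100, 100), (255, 0, 0, 255))   # opaque red
    over = Image.new("RGBA", (100, 100), (0, 0, 255, 128))   # half-transparent blue

    out1 = Image.alpha_composite(base, over)   # alpha-1 compositing, returns a new image

    mask = Image.new("L", (100, 100), 128)     # 50% mask everywhere
    out2 = Image.composite(over, base, mask)   # alpha-3, implemented via paste()

    out3 = Image.blend(base, over, 0.5)        # alpha-2 with a single value

    canvas = base.copy()
    canvas.paste(over, (10, 10), over)         # in place, using over's alpha as the mask
    canvas.alpha_composite(over, dest=(0, 0))  # in place, alpha-1 compositing
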
  • Image processing

    • Image.eval(ImageObj, function)
      • Return: ImageObj
      • Implemented on top of point()
    • im.convert(mode=None, matrix=None, dither=None, palette=0, colors=256)
      • Return: ImageObj
      • mode: the target mode. For an image not in "P" mode, omitting mode simply returns a copy; a "P" image with mode omitted is converted to RGB
      • matrix: a 4- or 12-element float tuple, optional; only usable for the "L" and "RGB" modes
      • dither: the dithering method used for RGB -> P, RGB -> 1 and RGB -> L (None, or the built-in FLOYDSTEINBERG). It is ignored when a matrix is supplied; with dither=None, black-and-white conversion uses 255 if pixel > 128 else 0
      • colors: the number of colors used by the ADAPTIVE palette
      • Color to grayscale: L = R * 299/1000 + G * 587/1000 + B * 114/1000
      • Color / grayscale to black and white: converted to "1" with Floyd-Steinberg dithering
      • With dither=None, values above 128 become 255; use point() if a different threshold is needed. Dithering applies only to RGB -> P, RGB -> 1 and L -> 1
      • When converting to "P" mode without a palette, alpha is set to 255 (opaque)
      • RGB to CMYK: (C, M, Y) = 255 - (R, G, B), K = 0, so the colors are slightly distorted
      • RGB to YCbCr: (Y, Cb, Cr) = (0.257*R + 0.504*G + 0.098*B + 16, -0.148*R - 0.291*G + 0.439*B + 128, 0.439*R - 0.368*G - 0.071*B + 128)
    • Filter an image: im.filter(filter)
      • Return: ImageObj
    • Map every pixel through a table: im.point(lut | function, mode=None)
      • Return: ImageObj
      • lut: a lookup table of 256 values per band (65536 for "I" mode images)
      • mode: the mode of the output image; only usable for conversions such as "L" -> "1", "P" -> "1" or "I" -> "L"
    • Add or replace the alpha band: im.putalpha(alpha | mask)
      • Return: None; if there is no alpha band yet, the mode is converted to "LA" or "RGBA"
      • Only meaningful for images that can carry an alpha band ("LA" / "RGBA")
    • Modify a single pixel: im.putpixel(xy, value)
      • Return: None
      • value: an integer, a tuple of band values, or an RGB / RGBA tuple for "P" mode
      • Working pixel by pixel is very slow
    • Quantize into a "P" image: im.quantize(colors=256, method=None, kmeans=0, palette=None, dither=1)
      • Return: ImageObj
      • colors: integer, the number of colors wanted, at most 256
      • method: integer; the default is 2 for RGBA images and 0 otherwise; RGBA images must use 2 or 3
      • kmeans: integer; when no palette is given, how many colors the k-means pass uses to represent the image (automatic grouping by distance)
      • palette: an ImageObj whose palette is used for the quantization; its mode can only be "P", and the source image must be RGB or L. With a palette, convert("P", dither, palette) is called; without one, colors, method and kmeans control the quantization
      • dither: integer, None or FLOYDSTEINBERG (default); only applies to RGB -> P, RGB -> 1 or L -> 1 conversions
    • Resize: im.resize(size, resample=NEAREST, box=None)
      • Return: ImageObj of the new size
      • resample: any of the six filters can be used; if the source mode is "1" or "P", NEAREST is forced
    • Rotate counterclockwise: im.rotate(angle, resample=NEAREST, expand=0, center=None, translate=None, fillcolor=None)
      • Return: ImageObj
      • angle: counterclockwise rotation angle in degrees; 0 degrees behaves like copy(). Constants such as ROTATE_90 cannot be used here, because they are only enum values (ROTATE_90 is 2), not angles
      • resample: resampling filter (LANCZOS 1, BOX 4 and HAMMING 5 are not available here)
      • expand: False / True; with False (the default) the image size stays the same and parts of the rotated content are cut off; with True the image grows to contain the whole rotated content
      • center: 2-tuple, the rotation center; the default is the center of the image
      • translate: 2-tuple, a translation applied after the rotation; the default is (0, 0)
      • fillcolor: an integer (single-band image) or tuple (multi-band image) color value, default 0 (black); it fills the area outside the rotated content
    • Shrink to a thumbnail: im.thumbnail(size, resample=BICUBIC)
      • Return: None; the original image is turned into the thumbnail, so copy it first if the original must be kept
      • size: 2-tuple, the maximum thumbnail size; to preserve the aspect ratio the result may be slightly smaller than requested
      • Uses Image.draft() internally to speed up the work
    • "1" image to X11 bitmap: im.tobitmap(name="image")
      • Return: string, the X11 bitmap source
      • name: the prefix used for the bitmap variable names; the default is "image"
    • Geometric transform: im.transform(size, method, data=None, resample=NEAREST, fill=1, fillcolor=None)
      • Return: ImageObj
      • data: the data required by the chosen method
      • resample: NEAREST / BILINEAR / BICUBIC; for mode "1" or "P" it defaults to NEAREST
      • fill: data passed to an ImageTransformHandler
      • fillcolor: the color used for the area outside the transformed content
    • Mirror or rotate: im.transpose(method)
      • Return: ImageObj
    • Find the non-black bounding box: im.getbbox()
      • Return: box; None if the image is entirely black
      • Use crop() with the returned box to extract that part of the image
    • Configure faster loading: im.draft(mode, size)
      • Return: None; the image itself is reconfigured, and the resulting size may differ slightly from the request
      • Only works for JPEG and PCD, and has no effect once the image data has been loaded (see the sketch after this group)
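
A minimal sketch of the common processing methods, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    im = Image.open("pic1.jpg")

    gray     = im.convert("L")                                # RGB -> grayscale
    inverted = Image.eval(im, lambda v: 255 - v)              # per-value lookup via point()
    resized  = im.resize((320, 240), resample=Image.LANCZOS)
    rotated  = im.rotate(30, expand=True, fillcolor=(255, 255, 255))  # CCW, grow the canvas
    mirrored = im.transpose(Image.FLIP_LEFT_RIGHT)
    indexed  = im.quantize(colors=64)                         # "P" image with 64 colors

    thumb = im.copy()                                         # thumbnail() works in place
    thumb.thumbnail((128, 128))

    bbox = gray.point(lambda v: 255 if v > 40 else 0).getbbox()  # box of the bright region
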
  • Read data or information

    • Band names of a mode: Image.getmodebandnames(mode)
      • Return: tuple of band names
    • Base mode of a mode: Image.getmodebase(mode)
      • Return: "L" (grayscale) or "RGB" (color)
    • Number of bands of a mode: Image.getmodebands(mode)
      • Return: the number of bands in that mode
    • Storage type of a mode: Image.getmodetype(mode)
      • Return: "L", "I" or "F"
    • Registered file extensions: Image.registered_extensions()
      • Return: dictionary, such as {'.bmp': 'BMP', '.dib': 'DIB', ..., '.apng': 'PNG'}
    • Band names of an image: im.getbands()
      • Return: tuple, e.g. ("R", "G", "B") for RGB
    • Count the occurrences of each color: im.getcolors(maxcolors=256)
      • Return: list of (count, color) pairs, not necessarily in order; below, q is a count, m the number of bands and n the number of distinct colors
      • Single band: [(q1, c1), (q2, c2), ..., (qn, cn)]
      • Multi-band: [(q1, (c11, c12, ..., c1m)), (q2, (c21, c22, ..., c2m)), ..., (qn, (cn1, cn2, ..., cnm))]
      • maxcolors: if the image has more colors than this, None is returned. A multi-band image easily exceeds 256 colors; RGB, for example, can have 256 x 256 x 256 colors
    • Image to data: im.getdata(band=None)
      • Return: an internal data object; it is not an ImageObj and normally needs further conversion before use
      • Wrap it in list() to get an ordinary list
    • Extreme color values: im.getextrema()
      • Return: a (min, max) 2-tuple for a single band, or an n-tuple of 2-tuples for a multi-band image
    • Get the palette: im.getpalette()
      • Return: list, or None if the image has no palette
    • Get a pixel value: im.getpixel(xy)
      • Return: an integer for a single-band image, or a tuple for a multi-band image
    • Color histogram: im.histogram(mask=None, extrema=None)
      • Return: list; counts of the color values of each band, counted from 0, with the bands concatenated one after another
      • mask: a mask ImageObj; only pixels where the mask is non-zero are counted; omit it to count everything. The mask mode can only be "1" or "L"
      • extrema: 2-tuple, the minimum and maximum color values; the range between them is split into 256 bins. Only for "I" and "F" images
    • Load the pixel data: im.load()
      • Return: a pixel access object, which is usually not needed (omitted)
      • This call is normally unnecessary because the data is loaded when the image is first used, except for multi-frame images (see the sketch after this group)
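
A minimal sketch of the read-only queries, assuming the hypothetical file pic1.jpg exists:

    from PIL import Image

    im = Image.open("pic1.jpg")

    print(Image.getmodebands("RGB"))         # -> 3
    print(im.getbands())                     # -> ("R", "G", "B")
    print(im.getextrema())                   # ((minR, maxR), (minG, maxG), (minB, maxB))
    print(im.getpixel((0, 0)))               # color of the upper-left pixel

    colors = im.getcolors(maxcolors=1000000) # list of (count, color); None if exceeded
    hist = im.histogram()                    # 256 counts per band, concatenated
    print(len(hist))                         # -> 768 for an RGB image
    print(im.getbbox())                      # bounding box of the non-black region
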
  • Write data

    • Data to image: im.putdata(data, scale=1.0, offset=0.0)
      • Return: None; the original image is modified
      • data: a sequence object containing the pixel data
      • Each value is stored as value * scale + offset
    • Install a palette: im.putpalette(data, rawmode="RGB")
      • Return: None; the palette of the image is replaced
      • data: a sequence (string or list) of palette data containing 768 values, every three of which are the R, G and B values of one entry, or an 8-bit string
      • rawmode: the raw mode of the palette data
      • The image must be in "P", "PA", "L" or "LA" mode (see the sketch after this group)
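
A minimal sketch of putdata() and putpalette(), built entirely in memory:

    from PIL import Image

    # putdata: fill an 8x8 grayscale image with a horizontal gradient.
    im = Image.new("L", (8, 8))
    im.putdata([x * 32 for y in range(8) for x in range(8)])

    # putpalette: convert to "P" and install an inverted 256-entry palette
    # (each entry repeats the gray value for R, G and B -> 768 values in total).
    pal_im = im.convert("P")
    pal_im.putpalette([v for g in range(255, -1, -1) for v in (g, g, g)])
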
  • Other image-related methods

    • Number of frames: im.n_frames
    • Move to a frame: im.seek(frame)
      • Return: None, or raises EOFError
      • frame: integer, the frame index
    • Current frame position: im.tell()
      • Return: integer, the current frame index
    • Display the image: im.show(title=None, command=None)
      • Return: None; the image is written to a temporary PPM file (Unix), PNG file (macOS) or BMP file (Windows) and shown from there
      • title: string, the window title
      • command: the external viewer command to use; since this method relies on external software, it is mainly useful for debugging (see the sketch after this list)
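
A minimal sketch of working with a multi-frame image, assuming a hypothetical animated file anim.gif exists:

    from PIL import Image

    im = Image.open("anim.gif")
    print(im.n_frames)                 # total number of frames

    frames = []
    for i in range(im.n_frames):
        im.seek(i)                     # move to frame i (raises EOFError past the end)
        frames.append(im.convert("RGB").copy())

    print(im.tell())                   # index of the currently selected frame
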

<<< The End >>>

This work is released under a CC license; reprints must credit the author and link to the original article

Jason Yang
