Meican (our meal-ordering service) exports one Excel file of dining transactions per day. We import the summarized dining data into a database, and the administrative office service then reconciles it against the company's own dining records.
The initial implementation was single-threaded; strip the threading out of the import_records shown below and the two versions are almost identical. The flow is simply: read the Excel data -> send each row to the administrative service's interface.
For safety, the import was run online at night. Once it was running, it turned out that each record took more than 1 s to import; grinding through thousands of records at 10 p.m. was soul-crushing.
And it was a pure wait, so I went downstairs for a couple of laps to get some air. The stuffy room had made me dizzy, and the cold made me sober. Suddenly it hit me: why not use multithreading?
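Some back-of-the-envelope arithmetic makes the pain concrete (the row count below is an illustrative stand-in for "thousands of records", not a figure from the original):

```python
rows = 3000             # illustrative: "thousands" of records
seconds_per_row = 1.0   # observed: each import takes over 1 s

# Single-threaded: nearly an hour of waiting.
single_threaded_minutes = rows * seconds_per_row / 60
print(single_threaded_minutes)   # 50.0

# With 100 threads (the script's default), ignoring server-side limits:
parallel_seconds = rows * seconds_per_row / 100
print(parallel_seconds)          # 30.0
```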
The first multithreaded version mixed the threading and the business logic together and was about as pleasant to read as you'd expect. Over the next two days I found some time to refactor it through several revisions, separating out a thread pool, an iterator, and import_records.
Much clearer now, but the iterator is still exposed: import_records has to call thread_checker itself to decide whether the current row belongs to the current thread, an idea similar to coroutines.
Exposing it has its pros and cons, but it covers daily use well enough, so I'm setting it aside for now. Books and movies await :).
myThread.py:

import threading


def task_pool(thread_num, task_fn):
    if thread_num <= 0:
        raise ValueError
    threads = []

    def gen_thread_checker(thread_id, step):
        base = 1
        i = 0

        def thread_checker():
            nonlocal i
            i += 1
            # print((thread_id, i, step, i < base or (i - base) % step != thread_id))
            if i < base or (i - base) % step != thread_id:
                return False
            return True

        return thread_checker

    for x in range(0, thread_num):
        threads.append(threading.Thread(
            target=task_fn,
            args=(x, thread_num, gen_thread_checker(x, thread_num))))

    # Start all threads
    for t in threads:
        t.start()

    # Wait in the main thread for all child threads to exit
    for t in threads:
        t.join()
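A quick way to see how task_pool stripes work across threads: each thread walks the whole sequence and the checker claims every thread_num-th item for it. The pool and checker are inlined below (slightly condensed, behavior unchanged) so the demo runs standalone; demo_task is a stand-in for import_records.

```python
import threading

def task_pool(thread_num, task_fn):
    # Same logic as myThread.task_pool, inlined for a self-contained demo.
    if thread_num <= 0:
        raise ValueError
    def gen_thread_checker(thread_id, step):
        i = 0
        def thread_checker():
            nonlocal i
            i += 1
            # The n-th call (1-based) belongs to this thread iff (n - 1) % step == thread_id.
            return (i - 1) % step == thread_id
        return thread_checker
    threads = [threading.Thread(target=task_fn,
                                args=(x, thread_num, gen_thread_checker(x, thread_num)))
               for x in range(thread_num)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Each "task" records which of 12 items its thread claimed.
claimed = {}
def demo_task(thread_id, thread_number, thread_checker):
    claimed[thread_id] = [i for i in range(12) if thread_checker()]

task_pool(4, demo_task)
print(claimed[0])  # [0, 4, 8]
print(claimed[1])  # [1, 5, 9]
```

Every thread iterates the full range; the checker simply tells it which items to skip, which is why import_records can stream the same worksheet in every thread without any shared queue.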
The import script:

import argparse
import re

import requests
from openpyxl import load_workbook
from requests import RequestException

import myThread

parser = argparse.ArgumentParser(description='import Meican dining transaction data')
parser.add_argument('--filename', '-f', help='path to the Meican transaction-data xlsx file', required=True)
parser.add_argument('--thread_num', '-t', help='number of threads', default=100, required=False)
parser.add_argument('--debug', '-d', help='debug mode', default=0, required=False)
args = parser.parse_args()

filename = args.filename
thread_num = int(args.thread_num)
debug = args.debug
if debug:
    print((filename, thread_num, debug))


def add_meican_meal_record(data):
    # Posts one record to the administrative service; body elided in the original post.
    pass


def import_records(thread_id, thread_number, thread_checker):
    wb = load_workbook(filename=filename)
    ws = wb.active
    for row in ws:
        # ------------------------------------------
        if row[0].value is None:
            break
        if not thread_checker():
            continue
        # ------------------------------------------
        if row[0].value == 'date' or row[0].value == 'total' \
                or not re.findall(r'^\d{4}-\d{1,2}-\d{1,2}$', row[0].value):
            continue
        else:
            date = str.replace(row[0].value, '-', '')
            order_id = row[3].value
            restaurant_name = row[5].value
            meal_plan_name = row[6].value
            meal_staffid = row[10].value
            identify = row[11].value
            add_meican_meal_record({
                'orderId': order_id,
                'date': date,
                'meal_plan_name': meal_plan_name,
                'meal_staffid': meal_staffid,
                'identify': identify,
                'restaurant_name': restaurant_name,
            })


myThread.task_pool(thread_num, import_records)
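import_records keeps or skips each row based on its first cell; that filter, and the date normalization, can be checked in isolation (the helper name here is mine, not from the script):

```python
import re

DATE_RE = r'^\d{4}-\d{1,2}-\d{1,2}$'

def is_data_row(first_cell):
    # Mirrors the skip logic in import_records: header/footer rows and
    # anything that is not a Y-M-D date are ignored.
    if first_cell in ('date', 'total'):
        return False
    return bool(re.findall(DATE_RE, first_cell))

print(is_data_row('2021-09-01'))      # True
print(is_data_row('total'))           # False
print(is_data_row('grand total'))     # False
print('2021-09-01'.replace('-', ''))  # 20210901 -- the format stored as `date`
```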
That wraps up this example of running tasks with Python multithreading. For more on Python multithreaded tasks, search developeppaper's earlier articles or browse the related articles below, and thanks for your continued support of developeppaper!