Python from entry to career change

Time: 2019-12-2

Caption: In my sophomore year, I realized that life is short, so I put my faith in Python and began to learn it. After more than half a year of study, I successfully switched careers to front-end development. Here is a tutorial I wrote to help you get started with Python.

Getting started with Python from zero

Starting from zero means covering the fundamentals: variables, syntax, data types, functions, scopes, modules, and so on.
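To give a feel for what those fundamentals look like in practice, here is a tiny sketch (the names and values are made up purely for illustration):

# Variables and data types
name = "Python"                        # a string
year = 1991                            # an integer
keywords = ["simple", "readable"]      # a list

# A function with its own local scope
def greet(who):
    message = "Hello, " + who + "!"    # 'message' exists only inside greet
    return message

print(greet(name))

# Modules: import from the standard library
import math
print(math.sqrt(16))                   # 4.0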

Like learning basketball, start with the three-step layup:

  1. Learn the basics of Python. Start from programming fundamentals so that you can read code. Three options:
    First: pick a Python tutorial book, such as the Chinese edition of Head First Python, Learn Python the Hard Way, or A Byte of Python. The Head First series is very simple and easy to follow, suitable even for liberal arts students; it starts from the very basics, so readers with a programming background may find it naive. The other two are ordinary introductory books; pick whichever you like. Some Python-related e-books are available for download, password: yjw3.
    Second: follow an online introductory tutorial. Many people recommend Liao Xuefeng’s Python tutorial.
    Third: take a video course; mooc.com is recommended. There are also NetEase Cloud Classroom and MOOC platforms (which host many university courses).
  2. Learn to write some basic Python programs. The exercises at the back of A Byte of Python, mentioned above, are a good start. If you want to consolidate the fundamentals further, practice a few easy problems on LeetCode. (That depends on your temperament; I’m impatient, and grinding through problems is too boring for me, even though doing them is genuinely useful.)
  3. Do some interesting small projects; there are 100 Python exercises here, all very basic. If those don’t feel impressive enough, try the projects on Shiyanlou, a good site where you can build some very interesting things.

Follow the three steps above and you can become “proficient in Python in 21 days”.

Tip: I recommend a magic tool, a Python execution visualizer, which lets you step through a program and watch its state, variables, function calls, and memory allocation as it runs. It is very helpful for understanding variable lifetimes, scope, debugging, and how a program actually works.
Development tools: PyCharm is recommended; the Community edition is free, and the Professional edition can be registered with an edu email.

Advanced Python

Going deeper means picking one area of Python and studying it in depth. Python’s main areas include AI (NLP, deep learning, image processing, and more), web development (back-end services, crawlers), data processing (data analysis, scientific computing), tooling (such as reading and writing Excel files, writing automation scripts), desktop development (GUI tools), and so on.
Python really is that powerful; writing this makes me want to write Python again.

Here is a brief introduction to what I know:

Web development

A Python web framework is a powerful tool for building websites. For a moderately complex CMS-style site (such as a news site or a blog), Django is unbeatable, with unmatched development efficiency. For sites that value flexibility, Flask is the first choice: flexible, small, and a very elegant framework.

  1. For Django, start with the official documentation to understand the basic concepts, then move on to a practical project, such as a Django blog development tutorial.
  2. For Flask, start with the official documentation as well, same as Django; a minimal sketch follows below.
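To show how little code a Flask site needs, here is a minimal sketch (assuming Flask is installed via pip install flask; the route and message are made up for illustration):

# A minimal Flask application
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run(debug=True)   # starts a development server at http://127.0.0.1:5000/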

Crawlers (collecting data from the web)

First, a bit of background: a web crawler can be thought of as a spider crawling over the network. The Internet is like a big web, and the crawler is the spider moving around on it, grabbing whatever resources it encounters. For example, when it fetches a web page and finds a path on it, which is really just a hyperlink, it can follow that link to another page and collect more data. Simply put, a crawler is a program that fetches the data you want from web pages.
Python has many crawler frameworks and libraries, and they are very easy to use.
Getting started:

  1. Understand how web pages are put together.
    The basics you need include:
    Basic knowledge of HTML
    An understanding of how a website sends and receives requests (POST/GET)
    A little JavaScript, for understanding dynamic pages
  2. To parse web pages, you need to learn regular expressions.
  3. Pick a crawler library, such as urllib, requests, or BeautifulSoup (bs4).
  4. Read its official documentation to see how it is used, and then you can raise a crawler of your own; a minimal sketch follows below.
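Here is a minimal sketch of steps 2 to 4, assuming the requests and beautifulsoup4 packages are installed and using example.com purely as a stand-in URL:

import re
import requests
from bs4 import BeautifulSoup

url = "https://example.com"                         # placeholder target page
response = requests.get(url, timeout=10)            # send a GET request
response.raise_for_status()                         # stop early on HTTP errors

soup = BeautifulSoup(response.text, "html.parser")  # parse the HTML
print(soup.title.string)                            # the page title

# Collect every hyperlink; following these links is what makes it "crawl"
for a in soup.find_all("a", href=True):
    print(a["href"])

# Regular expressions help pull patterns (here, email-like strings) out of raw HTML
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", response.text)
print(emails)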

Crawler tutorial: see the link here.

Data processing

The crawler section above covered how to get data; this section is about how to analyze and process it (tutorial linked here).
In scientific computing, MATLAB is widely used for data processing, but the all-purpose Python can replace it as well.
NumPy and pandas are the two most important modules for scientific computing, and Matplotlib is a very powerful Python data-visualization tool that can draw all kinds of charts.

  1. Read the official documentation of each library to understand its basic usage.
  2. Work through some simple projects; the Shiyanlou site mentioned above works here too. A small sketch follows below.
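As a taste of these libraries, here is a small sketch combining NumPy, pandas, and Matplotlib (assuming all three are installed; the data is generated on the spot purely for illustration):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Generate some sample data with NumPy
x = np.linspace(0, 2 * np.pi, 100)           # 100 evenly spaced points
df = pd.DataFrame({"x": x, "sin": np.sin(x), "cos": np.cos(x)})

# pandas gives quick summaries and tabular operations
print(df.describe())

# Matplotlib (via the pandas plotting wrapper) draws the chart
df.plot(x="x", y=["sin", "cos"], title="sin and cos")
plt.savefig("trig.png")                      # or plt.show() in an interactive session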

AI field

I will keep this brief and just quote a basic introduction from elsewhere:

  1. Theano is a Python library for defining and evaluating mathematical expressions over multi-dimensional arrays. It makes it easier to write deep learning algorithms in Python.
  2. Keras is a streamlined and highly modular neural-network library, similar in spirit to Torch. Under the hood, Theano helps it optimize tensor operations on CPU and GPU. (See the brief sketch after this list.)
  3. Pylearn2 is a library that bundles a large number of models and training algorithms such as stochastic gradient descent. It is widely used for deep learning and is also built on Theano.
  4. Lasagne is a lightweight library for building and training neural networks in Theano. It aims to be simple, transparent, modular, pragmatic, focused, and restrained.
  5. Blocks is a framework that helps you build neural-network models on top of Theano.
  6. Caffe is a deep learning framework built around the ideas of clear expression, speed, and modularity. It was developed by the Berkeley Vision and Learning Center (BVLC) and online community contributors. Google’s DeepDream image-processing program is built on the Caffe framework. The framework itself is a BSD-licensed C++ library with a Python interface.
  7. nolearn contains a number of wrappers and abstractions around other neural-network libraries, most notably Lasagne, along with some practical machine-learning utilities.
  8. Gensim is a toolkit implemented in Python for processing large text collections with efficient algorithms.
  9. CXXNET is a fast, concise, distributed deep learning framework based on MShadow. It is a lightweight, extensible C++/CUDA neural-network toolkit with friendly Python/MATLAB interfaces for training and prediction.
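As a taste of what working with one of these libraries looks like, here is a tiny Keras sketch. It uses the Keras bundled with TensorFlow today rather than the Theano backend mentioned above, and the data is random, purely to show the define/compile/fit workflow:

import numpy as np
from tensorflow import keras

x = np.random.rand(100, 4)                  # 100 samples, 4 features
y = np.random.randint(0, 2, size=(100,))    # random binary labels

# Define a small fully connected network
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Compile with an optimizer and a loss, then train briefly on the toy data
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)

print(model.predict(x[:3]))                 # predicted probabilities for three samples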

There is too much to cover here; a basic way to learn it is given below.

Here is a complete Python video course:

Link: https://pan.baidu.com/s/1htryqty password: nc1f

Appendix:

First, let’s see how powerful Python is; otherwise we won’t be attracted to it enough to keep learning.
Face detection and recognition in about 20 lines of code:
The face_recognition library makes face recognition available from Python or the command line. It is built on dlib’s deep-learning face recognition and reaches 99.38% accuracy on the Labeled Faces in the Wild benchmark.

# Import the face recognition library
import face_recognition
# Load known images to build the reference library
known_obama_image = face_recognition.load_image_file("face1.jpg")
known_biden_image = face_recognition.load_image_file("face_kid.jpg")
# Encode the loaded images
obama_face_encoding = face_recognition.face_encodings(known_obama_image)[0]
biden_face_encoding = face_recognition.face_encodings(known_biden_image)[0]
known_encodings = [
    obama_face_encoding,
    biden_face_encoding
]
# Load and encode the image to be recognized
image_to_test = face_recognition.load_image_file("face2.jpg")
image_to_test_encoding = face_recognition.face_encodings(image_to_test)[0]
# Compute the distance between this image and each known image
face_distances = face_recognition.face_distance(known_encodings, image_to_test_encoding)
# Compare against distance thresholds for "same face" and print the results
for i, face_distance in enumerate(face_distances):
    print("The test image has a distance of {:.2} from known image #{}".format(face_distance, i))
    print("- With a normal cutoff of 0.6, would the test image match the known image? {}".format(face_distance < 0.6))
    print("- With a very strict cutoff of 0.5, would the test image match the known image? {}".format(face_distance < 0.5))
    print()
