Tag: cluster
-
Time:2021-4-20
1. Why a message system? 1. Decoupling: lets you extend or modify the processes on either side independently, as long as they comply with the same interface constraints. 2. Redundancy: the message queue keeps data persistent until it has been completely processed, which avoids the risk of data loss. In the “insert-get […]
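The decoupling idea in this excerpt can be sketched with Python's in-process `queue.Queue` standing in for a real broker (the message fields and sentinel convention here are illustrative, not from the original article):

```python
import queue
import threading

# Stand-in for a message broker: producer and consumer share only
# this queue's put/get interface, not each other's internals.
broker = queue.Queue()

def producer(n):
    # The producer only needs the "insert" side of the contract.
    for i in range(n):
        broker.put({"order_id": i})
    broker.put(None)  # sentinel: no more messages

def consumer(results):
    # The consumer only needs the "get" side; it can be rewritten
    # or scaled independently of the producer.
    while True:
        msg = broker.get()
        if msg is None:
            break
        results.append(msg["order_id"])

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer(3)
t.join()
print(results)  # → [0, 1, 2]
```

A real broker adds the redundancy the excerpt mentions by persisting messages until they are acknowledged, which an in-memory queue does not.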
-
Time:2021-4-20
With the expansion of business, data continues to accumulate, and the storage capacity and computing power of the database system will gradually be overwhelmed, so an excellent database system must have good scalability. The data nodes in a DolphinDB cluster integrate computing and storage, so to improve the computing power […]
-
Time:2021-4-20
Source: official blog, via the PyTorch developer community (WeChat official account). Today’s machine learning needs distributed computing. Whether it is training networks, tuning hyperparameters, serving models or processing data, machine learning is computationally intensive. Without access to a cluster, it will be very slow. Ray is a popular distributed Python framework, which can be […]
-
Time:2021-4-18
As early as a few years ago, sessions could be shared with Session State. Today, let’s summarize how to share sessions with a highly available Redis sentinel cluster: the working-process diagram and the configuration. Redis data service configuration: first, configure the Redis master-slave servers by modifying the redis.conf file […]
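As a rough illustration of the master-slave and sentinel setup this excerpt describes (host addresses, ports, and the master name are assumptions, not from the article), the relevant configuration directives typically look like:

```conf
# redis.conf on each replica (Redis 5+ syntax; older versions use "slaveof")
replicaof 192.168.0.10 6379

# sentinel.conf on each sentinel node: monitor the master "mymaster";
# a quorum of 2 sentinels must agree before a failover starts
sentinel monitor mymaster 192.168.0.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

Session-sharing clients then connect through the sentinels, which hand out the current master's address even after a failover.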
-
Time:2021-4-18
Chapter 0: introduction. From a historical perspective, Spark originated from the AMPLab big data analysis platform at the University of California, Berkeley. Spark is based on in-memory computing and multi-iteration batch processing. Spark covers data warehousing, stream processing, graph computing and other computing paradigms, and is a full-stack computing platform in […]
-
Time:2021-4-17
background To clarify, I didn’t solve the whole problem myself; I was mostly a bystander. But since I am actually in charge of this project, the whole process is quite clear to me. I also told the colleague responsible that I would walk him through a project review after some time. As a […]
-
Time:2021-4-17
With the development of the domestic Internet industry, mega clusters are not as rare as they were a few years ago, but they are still uncommon, and the opportunity to troubleshoot performance on a mega cluster is even rarer. This time, I carried out performance troubleshooting of the Hadoop NameNode with a […]
-
Time:2021-4-15
Introduction Hello everyone, I will sort out some high-frequency Java interview questions and share them with you, and I hope you can use them while looking for a job! This chapter focuses on high-frequency interview questions about Java message middleware. Q1: What are messages and batches? Message: the data […]
-
Time:2021-4-15
This article is a practice summary of learning Redis Cluster (based on Redis 6.0+). It describes in detail the process of gradually building a Redis Cluster environment and completes the practice of cluster scaling. Brief introduction to Redis Cluster: Redis Cluster is the distributed database solution provided by Redis. It shards data across nodes, […]
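The sharding this excerpt refers to maps every key to one of 16384 hash slots via CRC16. A minimal sketch of that mapping (the helper names here are my own, not from the article):

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XModem variant), the checksum Redis Cluster
    # uses for key hashing: polynomial 0x1021, initial value 0.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Redis Cluster honours "hash tags": if the key contains {...},
    # only the part inside the braces is hashed, so related keys
    # (e.g. user:{42}:name and user:{42}:age) land on the same slot
    # and can be used together in multi-key operations.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("user:{42}:name") == key_slot("user:{42}:age"))  # → True
```

During the scaling exercise the article mentions, resharding moves whole slots (and the keys hashed into them) between nodes.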
-
Time:2021-4-14
In today’s article, we will focus on how to build a scalable data processing platform using the SMACK (Spark, Mesos, Akka, Cassandra and Kafka) stack. Although the stack consists of only a few simple parts, it can implement a large number of different system designs. Beyond a pure batch or stream processing mechanism, we […]
-
Time:2021-4-14
Last time, I troubleshot a Hadoop NameNode at over-a-trillion scale. Through four days of hard work, we finally solved the bottleneck of Hadoop 2.6.0, but life often follows Murphy’s law: the worst possibility you try to avoid may eventually happen. To avoid a second NameNode bottleneck, I decided to upgrade […]
-
Time:2021-4-9
After setting up Spark 1.6.0 locally, in addition to packaging with the sbt command from the official documentation and submitting with spark-submit, we can use IntelliJ IDEA to develop and debug locally, and then submit the job to the cluster production environment to run. Using an IDE can improve our development efficiency. My blog […]
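For the sbt packaging route this excerpt mentions, a minimal project definition for a Spark 1.6.0 job might look like the following (the project name and version values are illustrative; Spark 1.6.0 was built against Scala 2.10 by default):

```scala
// build.sbt — minimal definition for an sbt-packaged Spark 1.6.0 job
name := "spark-demo"
version := "0.1"
scalaVersion := "2.10.6"

// "provided" keeps Spark itself out of the packaged jar, since the
// cluster supplies it at runtime when the job is launched via spark-submit
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0" % "provided"
```

With this in place, `sbt package` produces the jar that `spark-submit` ships to the cluster, while IntelliJ IDEA can run the same sources locally for debugging.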