Tag:cluster

  • Hadoop and HBase pseudo-distributed cluster installation steps

    Time:2020-10-1

    The versions of HBase, Hadoop and JDK must correspond, otherwise errors are easy to run into. The compatibility between HBase and JDK versions is: Java Version / HBase 1.3+ / HBase 2.1+ / HBase 2.3+; JDK7: supported / not supported / not supported; JDK8: supported / supported / supported; JDK11: not supported / not supported / […]
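
    Since the JDK has to match the compatibility table above before HBase is installed, a quick check is to print the running JVM's version and vendor. A minimal sketch; the class name is arbitrary and not from the article:

    ```java
    // Prints the JVM version/vendor so it can be compared against the HBase/JDK
    // compatibility table before installing HBase. Class name is arbitrary.
    public class JdkVersionCheck {
        public static void main(String[] args) {
            System.out.println("java.version = " + System.getProperty("java.version"));
            System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
        }
    }
    ```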

  • Graph database design practice | load balancing and data migration of storage services

    Time:2020-10-1

    In the article "Nebula architecture analysis series (1): storage design of the graph database", we mentioned that the management of distributed graph storage is scheduled centrally by the Meta Service, which records the distribution of all partitions and the current state of each machine. When a DBA adds or removes machines, they only need to enter the […]

  • Performance tuning of Apache Pulsar at BIGO

    Time:2020-9-30

    Background With the support of artificial intelligence technology, BIGO's video-based products and services, including Bigo Live and Likee, are widely popular and have users in more than 150 countries. Bigo Live has taken off in more than 150 countries, and Likee has more than 100 million users and is very popular with Generation Z. With […]

  • Packaging a Spark Java + Scala project as a jar

    Time:2020-9-29

    1. Method 1: Maven packaging, pom.xml file <plugins> <plugin> <artifactId>maven-assembly-plugin</artifactId> <configuration> <appendAssemblyId>false</appendAssemblyId> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> <archive> <manifest> <!-- the class of the main-method entry is specified here --> <mainClass>ch.kmeans2.SparkStreamingKMeansKafkaExample</mainClass> </manifest> </archive> </configuration> <executions> <execution> <id>make-assembly</id> <phase>package</phase> <goals> <goal>assembly</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.scala-tools</groupId> <artifactId>maven-scala-plugin</artifactId> <version>2.15.2</version> <executions> <execution> <id>scala-compile-first</id> <goals> <goal>compile</goal> </goals> <configuration> <includes> […]
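
    The <mainClass> value above has to name a class with a standard main method that ends up inside the assembled jar. A minimal sketch of such an entry point in Java; the package and class name simply mirror the pom value, and the body is a placeholder rather than the article's actual streaming/KMeans code:

    ```java
    package ch.kmeans2;

    import org.apache.spark.sql.SparkSession;

    // Placeholder entry point matching the <mainClass> element in the pom above;
    // the real project presumably implements its streaming KMeans-on-Kafka logic here.
    public final class SparkStreamingKMeansKafkaExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("SparkStreamingKMeansKafkaExample")
                    .getOrCreate();   // the master URL is normally supplied by spark-submit

            // ... build the Kafka stream and run streaming KMeans here ...

            spark.stop();
        }
    }
    ```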

  • Replica sets + sharded cluster based on MongoDB 4.0.0 in Docker containers

    Time:2020-9-29

    Target: use three physical machines for the database cluster; downtime of any one machine must not affect online business, and there must be no data loss. Plan: the cluster is replica sets + sharded cluster, which gives high availability, failover, distributed storage and so on. As shown in the figure above, our cluster configuration is as follows: three […]
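
    Once such a cluster is up, application code normally connects through the mongos routers rather than to any single shard. A minimal sketch with the MongoDB Java sync driver; the three host:port pairs are placeholders for whatever addresses the Docker containers actually expose:

    ```java
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import org.bson.Document;

    public class MongosPing {
        public static void main(String[] args) {
            // Placeholder mongos addresses; replace with the ports published by the containers.
            String uri = "mongodb://mongos1:27017,mongos2:27017,mongos3:27017";
            try (MongoClient client = MongoClients.create(uri)) {
                // "ping" is a cheap reachability check against the routers.
                Document reply = client.getDatabase("admin").runCommand(new Document("ping", 1));
                System.out.println(reply.toJson());
            }
        }
    }
    ```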

  • [Deconstructing cloud native] Getting to know the Kubernetes Service

    Time:2020-9-28

    Editor's note: cloud native is one of the core technology directions pursued by Netease Hangzhou Research Institute. As the technical standard of the cloud native industry and the cornerstone of the cloud native ecosystem, the open source container platform Kubernetes inevitably has a certain complexity in its design. Based on the summary of senior engineers […]

  • Deploying Flink jobs on Kubernetes

    Time:2020-9-28

    Kubernetes Kubernetes (k8s), created by Google, has become the most popular open source orchestration system for managing containerized applications across multiple hosts. It provides the mechanisms needed to build and deploy scalable and reliable distributed applications. We are living in an era where service uptime must be close to 99.9%. To achieve […]

  • Introduction to keepalived basics for high-availability services

    Time:2020-9-28

    We have previously talked about the high-availability cluster concepts of corosync + pacemaker and the use of the related tools; please refer to https://www.cnblogs.com/qiuhom-1874/category/1838133.html. Today, let's talk about the high-availability service keepalived. Compared with corosync + pacemaker, keepalived is much lighter; its working principle is an implementation of VRRP. At the beginning of its design, […]

  • How to configure cores, executors and memory for a Spark task

    Time:2020-9-27

    Static allocation: the OS reserves 1 core and 1 GB per node; per-executor core concurrency should be <= 5; the AM takes 1 executor, so remaining executors = total executors - 1; each executor reserves 0.07 of its memory as overhead, memoryOverhead = max(384 MB, 0.07 × spark.executor.memory); executor memory = (memory per node - 1 GB (OS)) / executors per node - memoryOverhead. Example 1. Hardware resources: 6 nodes, 16 cores per node, 64 GB memory. Each node reserves 1 core and […]
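
    Applying the static-allocation rules quoted above to Example 1 (6 nodes, 16 cores and 64 GB per node), a minimal sketch of the arithmetic might look like this; the class and variable names are mine, and the final rounding may differ slightly from the article's numbers:

    ```java
    // Works through the static-allocation rules from the excerpt for "Example 1".
    public class SparkResourceEstimate {
        public static void main(String[] args) {
            int nodes = 6, coresPerNode = 16;
            double memPerNodeGb = 64.0;

            int usableCores = coresPerNode - 1;        // reserve 1 core per node for the OS
            double usableMemGb = memPerNodeGb - 1.0;   // reserve 1 GB per node for the OS

            int coresPerExecutor = 5;                  // keep per-executor concurrency <= 5
            int executorsPerNode = usableCores / coresPerExecutor;      // 15 / 5 = 3
            int totalExecutors = nodes * executorsPerNode - 1;          // minus 1 for the AM = 17

            double rawExecMemGb = usableMemGb / executorsPerNode;       // 63 / 3 = 21 GB
            double overheadGb = Math.max(384.0 / 1024, 0.07 * rawExecMemGb); // max(384 MB, 7%)
            long executorMemGb = (long) Math.floor(rawExecMemGb - overheadGb); // about 19 GB

            System.out.printf("--num-executors %d --executor-cores %d --executor-memory %dG%n",
                    totalExecutors, coresPerExecutor, executorMemGb);
        }
    }
    ```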

  • Hadoop framework: building a pseudo-distributed cluster on a single server

    Time:2020-9-27

    Source code: GitHub, click here || Gitee, click here. 1、 Basic environment. 1. Environment versions: CentOS 7; Hadoop 2.7.2; JDK 1.8. 2. Hadoop directory structure. bin directory: stores the scripts for Hadoop's HDFS and YARN services; etc directory: Hadoop-related configuration files; lib directory: stores Hadoop's native libraries, providing data compression and […]

  • Building a fully distributed Hadoop cluster

    Time:2020-9-26

    Cluster planning. HDFS: 1 NameNode + n DataNodes + 1 2NN; YARN: 1 ResourceManager + n NodeManagers. Layout: hadoop1 runs DN, NM, NN; hadoop2 runs DN, NM, RM; hadoop3 runs DN, NM, 2NN. Preparation: prepare three virtual machines by creating one new machine and then cloning two more. 1. Modify the host name: vi /etc/sysconfig/network. The hostnames of […]
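
    Once the three nodes are up, one quick way to confirm that HDFS is reachable from client code is to list the root directory with Hadoop's Java FileSystem API. A minimal sketch; the NameNode address hdfs://hadoop1:9000 and the root user are assumptions, so match them to your own core-site.xml and cluster user:

    ```java
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsSmokeTest {
        public static void main(String[] args) throws Exception {
            // hdfs://hadoop1:9000 and the "root" user are assumptions; use whatever
            // fs.defaultFS and user the cluster is actually configured with.
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(new URI("hdfs://hadoop1:9000"), conf, "root")) {
                for (FileStatus status : fs.listStatus(new Path("/"))) {
                    System.out.println(status.getPath() + "\t" + status.getLen());
                }
            }
        }
    }
    ```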

  • K8s multi-cluster configuration management platform

    Time:2020-9-26

    K8s multi-cluster configuration management platform. Temporary cluster features: simulates the production environment. Overall environment description: intranet 10.17.1.44. [[email protected] account-server]# kubectl get nodes: NAME STATUS ROLES AGE VERSION; localhost Ready master 25h v1.17.5. [[email protected] account-server]# kubectl get pods -A: NAMESPACE NAME READY STATUS RESTARTS AGE; cattle-system cattle-cluster-agent-689f8dcc64-7slpk 1/1 Running 0 78m; cattle-system cattle-node-agent-7lndv 1/1 Running 0 […]