dcos app archive

Some commonly used apps need to be installed on a DC/OS or Mesos/Marathon cluster; this archive keeps the Marathon JSON files and command snippets for those apps.

Cassandra

install

dcos package install cassandra --package-version=1.0.7-2.2.5  

This method is problematic; installing with Mr. Tang's script is more reliable. A backup of the script is kept under linkerDAPData/smack, inside smack_0.9.
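
If the package install is attempted anyway, a quick sanity check is to confirm the framework registered and its tasks are running (a minimal sketch using standard DC/OS CLI commands):

dcos package list cassandra
dcos task | grep cassandra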

nginx artifacts

Marathon JSON

{
  "id": "/artifacts",
  "cmd": null,
  "cpus": 0.1,
  "mem": 256,
  "disk": 0,
  "instances": 1,
  "constraints": [
    [
      "hostname",
      "CLUSTER",
      "10.140.0.18"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/usr/share/nginx/html",
        "hostPath": "/home/zhangjie0220/html",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "servicePort": 10001,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "portDefinitions": [
    {
      "port": 10001,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
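
To deploy the app and publish a jar through it, a minimal sketch (assumes the JSON above is saved as artifacts.json; the curl assumes port 80 on 10.140.0.18 is reachable, matching the http://10.140.0.18/... URLs used by the Spark submit commands below):

dcos marathon app add artifacts.json
cp main_streaming_linker.jar /home/zhangjie0220/html/
curl -I http://10.140.0.18/main_streaming_linker.jar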

Kafka

  • install kafka
dcos package install kafka --package-version=0.9.2.0  
  • add broker-[0-2]
dcos kafka broker add 0 --cpus 0.5 --mem 256 --heap 256 && dcos kafka broker start 0
  • add a new topic in Kafka for the dap-test-00 env
dcos kafka topic add dap-test-00 --partitions 2 --replicas 2 --broker 0,1,2  
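
To sanity-check the new topic, one option is the stock Kafka 0.9 console consumer pointed at the framework's brokers (a sketch; the broker list is copied from the linkerConnector section below and may differ per cluster):

bin/kafka-console-consumer.sh --new-consumer --bootstrap-server 10.140.0.16:1025,10.140.0.10:1025,10.140.0.19:1025 --topic dap-test-00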

Spark on DC/OS

  • install spark on dcos
dcos package install spark --package-version=1.6.0  
  • deploy jar to the Spark cluster using the DC/OS CLI (dap-test-00 env) [beta, has bugs]
dcos spark run --submit-args='-Dspark.mesos.coarse=true --driver-cores 1 --driver-memory 1024M --class LinkerStreaming http://10.140.0.18/main_streaming_linker.jar --zookeeper 10.140.0.14:2181 --batch_interval 5 --cassandra_host 10.140.0.17,10.140.0.16,10.140.0.11 --topics dap-test-00 --verbose'
  • deploy jar to the Spark cluster using the DC/OS CLI (dap env) [beta, has bugs]
dcos spark run --submit-args='-Dspark.mesos.coarse=true --driver-cores 1 --driver-memory 1024M --class LinkerStreaming http://10.140.0.5:25276/main_streaming_linker.jar --zookeeper 10.140.0.3:2181 --batch_interval 5 --cassandra_host 10.140.0.4,10.140.0.6,10.140.0.7 --topics dap --ckp_dir hdfs://10.140.0.6:9000 --verbose '  
  • deploy jar to the Spark cluster with spark-submit (runs as the Spark driver)
bin/spark-submit --master mesos://10.140.0.14:5050 \  
    --class LinkerStreaming main_streaming_linker.jar --zookeeper 10.140.0.14:2181 \
    --batch_interval 5 --cassandra_host 10.140.0.17,10.140.0.16,10.140.0.11 --topics dap-test-00 --verbose
  • spark conf file (spark-defaults.conf; see the --conf sketch after the framework JSON below)
spark.mesos.executor.docker.image adolphlwq/docker-spark:perf-spark-executor-1.6.0-openjdk-7-jre  
spark.executor.memory             2g  
spark.executor.cores              1  
  • DC/OS Spark framework JSON
{
  "id": "/spark",
  "cmd": "mv ./conf/log4j-dispatcher.properties ./conf/log4j.properties && ./bin/spark-class org.apache.spark.deploy.mesos.MesosClusterDispatcher --port $PORT0 --webui-port $PORT1 --master mesos://zk://master.mesos:2181/mesos --zk master.mesos:2181 --host $HOST --name spark",
  "cpus": 1,
  "mem": 6144,
  "disk": 2048,
  "instances": 1,
  "constraints": [
    [
      "hostname",
      "LIKE",
      "10.140.0.16"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "mesosphere/spark:1.6.0",
      "network": "HOST",
      "privileged": false,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "env": {
    "APPLICATION_WEB_PROXY_BASE": "/service/spark",
    "SPARK_USER": "root"
  },
  "healthChecks": [
    {
      "path": "/",
      "protocol": "HTTP",
      "portIndex": 1,
      "gracePeriodSeconds": 5,
      "intervalSeconds": 60,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 3,
      "ignoreHttp1xx": false
    }
  ],
  "labels": {
    "DCOS_PACKAGE_RELEASE": "4",
    "DCOS_PACKAGE_SOURCE": "http://10.140.0.18/version-2.x",
    "DCOS_PACKAGE_COMMAND": "eyJwaXAiOlsiaHR0cDovL2FydGlmYWN0cy5tYXJhdGhvbi5tZXNvcy9kY29zX3NwYXJrLTAuNS42LXB5Mi5weTMtbm9uZS1hbnkud2hsIl19",
    "DCOS_PACKAGE_METADATA": "eyJsaWNlbnNlcyI6W3sibmFtZSI6IkFwYWNoZSBMaWNlbnNlIFZlcnNpb24gMi4wIiwidXJsIjoiaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL2FwYWNoZS9zcGFyay9tYXN0ZXIvTElDRU5TRSJ9XSwibmFtZSI6InNwYXJrIiwicG9zdEluc3RhbGxOb3RlcyI6IlRoZSBBcGFjaGUgU3BhcmsgREMvT1MgU2VydmljZSBoYXMgYmVlbiBzdWNjZXNzZnVsbHkgaW5zdGFsbGVkIVxuXG5cdERvY3VtZW50YXRpb246IGh0dHBzOi8vc3BhcmsuYXBhY2hlLm9yZy9kb2NzL2xhdGVzdC9ydW5uaW5nLW9uLW1lc29zLmh0bWxcblx0SXNzdWVzOiBodHRwczovL2lzc3Vlcy5hcGFjaGUub3JnL2ppcmEvYnJvd3NlL1NQQVJLIiwic2NtIjoiaHR0cHM6Ly9naXRodWIuY29tL2FwYWNoZS9zcGFyay5naXQiLCJkZXNjcmlwdGlvbiI6IlNwYXJrIGlzIGEgZmFzdCBhbmQgZ2VuZXJhbCBjbHVzdGVyIGNvbXB1dGluZyBzeXN0ZW0gZm9yIEJpZyBEYXRhIiwicGFja2FnaW5nVmVyc2lvbiI6IjIuMCIsInRhZ3MiOlsiYmlnZGF0YSIsIm1hcHJlZHVjZSIsImJhdGNoIiwiYW5hbHl0aWNzIl0sInBvc3RVbmluc3RhbGxOb3RlcyI6IlRoZSBBcGFjaGUgU3BhcmsgREMvT1MgU2VydmljZSBoYXMgYmVlbiB1bmluc3RhbGxlZCBhbmQgd2lsbCBubyBsb25nZXIgcnVuLlxuUGxlYXNlIGZvbGxvdyB0aGUgaW5zdHJ1Y3Rpb25zIGF0IGh0dHBzOi8vZG9jcy5tZXNvc3BoZXJlLmNvbS91c2FnZS9zZXJ2aWNlcy9zcGFyay91bmluc3RhbGwvIHRvIGNsZWFuIHVwIGFueSBwZXJzaXN0ZWQgc3RhdGUuIiwibWFpbnRhaW5lciI6InN1cHBvcnRAbWVzb3NwaGVyZS5pbyIsInNlbGVjdGVkIjp0cnVlLCJmcmFtZXdvcmsiOnRydWUsInZlcnNpb24iOiIxLjYuMCIsInByZUluc3RhbGxOb3RlcyI6Ik5vdGUgdGhhdCB0aGUgQXBhY2hlIFNwYXJrIERDL09TIFNlcnZpY2UgaXMgYmV0YSBhbmQgdGhlcmUgbWF5IGJlIGJ1Z3MsIGluY29tcGxldGUgZmVhdHVyZXMsIGluY29ycmVjdCBkb2N1bWVudGF0aW9uIG9yIG90aGVyIGRpc2NyZXBhbmNpZXMuXG5XZSByZWNvbW1lbmQgYSBtaW5pbXVtIG9mIHR3byBub2RlcyB3aXRoIGF0IGxlYXN0IDIgQ1BVIGFuZCAyR0Igb2YgUkFNIGF2YWlsYWJsZSBmb3IgdGhlIFNwYXJrIFNlcnZpY2UgYW5kIHJ1bm5pbmcgYSBTcGFyayBqb2IuXG5Ob3RlOiBUaGUgU3BhcmsgQ0xJIG1heSB0YWtlIHVwIHRvIDVtaW4gdG8gZG93bmxvYWQgZGVwZW5kaW5nIG9uIHlvdXIgY29ubmVjdGlvbi4iLCJpbWFnZXMiOnsiaWNvbi1zbWFsbCI6Imh0dHBzOi8vZG93bmxvYWRzLm1lc29zcGhlcmUuY29tL3VuaXZlcnNlL2Fzc2V0cy9pY29uLXNlcnZpY2Utc3Bhcmstc21hbGwucG5nIiwiaWNvbi1tZWRpdW0iOiJodHRwczovL2Rvd25sb2Fkcy5tZXNvc3BoZXJlLmNvbS91bml2ZXJzZS9hc3NldHMvaWNvbi1zZXJ2aWNlLXNwYXJrLW1lZGl1bS5wbmciLCJpY29uLWxhcmdlIjoiaHR0cHM6Ly9kb3dubG9hZHMubWVzb3NwaGVyZS5jb20vdW5pdmVyc2UvYXNzZXRzL2ljb24tc2VydmljZS1zcGFyay1sYXJnZS5wbmciLCJzY3JlZW5zaG90cyI6bnVsbH19",
    "DCOS_PACKAGE_REGISTRY_VERSION": "2.0",
    "DCOS_PACKAGE_FRAMEWORK_NAME": "spark",
    "DCOS_PACKAGE_VERSION": "1.6.0",
    "SPARK_URI": "http://downloads.mesosphere.io.s3.amazonaws.com/spark/assets/spark-1.6.0.tgz",
    "DCOS_PACKAGE_NAME": "spark",
    "DCOS_PACKAGE_IS_FRAMEWORK": "true"
  },
  "portDefinitions": [
    {
      "port": 10004,
      "protocol": "tcp",
      "labels": {}
    },
    {
      "port": 10006,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
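
As noted in the conf-file bullet above, those three properties normally live in conf/spark-defaults.conf; they can also be passed per job via --conf inside --submit-args. A sketch reusing the dap-test-00 submit command (the flags mirror the ones already shown; only the --conf entries are new):

dcos spark run --submit-args='-Dspark.mesos.coarse=true --conf spark.mesos.executor.docker.image=adolphlwq/docker-spark:perf-spark-executor-1.6.0-openjdk-7-jre --conf spark.executor.memory=2g --conf spark.executor.cores=1 --class LinkerStreaming http://10.140.0.18/main_streaming_linker.jar --zookeeper 10.140.0.14:2181 --batch_interval 5 --cassandra_host 10.140.0.17,10.140.0.16,10.140.0.11 --topics dap-test-00 --verbose'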

The Spark app jars are hosted in nginx (the artifacts app above).

linkerConnector

The linkerConnector binary is built into the Docker image adolphlwq/linkerconnector.

  • JSON for a single-node test run
{
  "id": "/linker-connector",
  "cmd": "/linkerConnector -r /linker/proc  -i 2000 -d kafka -t dap-test-00 -s 10.140.0.16:1025,10.140.0.10:1025,10.140.0.19:1025 -c http://localhost:10000",
  "cpus": 0.01,
  "mem": 32,
  "disk": 0,
  "instances": 1,
  "constraints": [
    [
      "hostname",
      "LIKE",
      "10.140.0.19"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/linker/proc",
        "hostPath": "/proc",
        "mode": "RO"
      },
      {
        "containerPath": "/tmp",
        "hostPath": "/tmp",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "linkerrepository/linker_connector",
      "network": "HOST",
      "privileged": true,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "portDefinitions": [
    {
      "port": 10007,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
  • run one instance on every slave with no duplicates (see the scaling note after the stable-v2 JSON below)
{
  "id": "/linker-connector",
  "cmd": "/linkerConnector -r /linker/proc  -i 2000 -d kafka -t dap-test-00 -s 10.140.0.16:1025,10.140.0.10:1025,10.140.0.19:1025 -c http://localhost:10000",
  "cpus": 0.01,
  "mem": 32,
  "disk": 0,
  "instances": 6,
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/linker/proc",
        "hostPath": "/proc",
        "mode": "RO"
      },
      {
        "containerPath": "/tmp",
        "hostPath": "/tmp",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "linkerrepository/linker_connector",
      "network": "HOST",
      "privileged": true,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "portDefinitions": [
    {
      "port": 10007,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
  • stable-v1 (the JSON below is the container section only; a docker run mapping follows the stable-v2 JSON below)
/linkerConnector -r /linker/proc -i 2000 -d kafka -t mic-test -s 10.140.0.8:1025 -c http://localhost:10000 -f false

{
  "type": "DOCKER",
  "volumes": [
    {
      "containerPath": "/linker/proc",
      "hostPath": "/proc",
      "mode": "RO"
    },
    {
      "containerPath": "/dev",
      "hostPath": "/dev",
      "mode": "RO"
    },
    {
      "containerPath": "/var/run/docker.sock",
      "hostPath": "/var/run/docker.sock",
      "mode": "RO"
    }
  ],
  "docker": {
    "image": "linkerrepository/linker_connector:stable-v1",
    "network": "HOST",
    "privileged": true,
    "parameters": [
      {
        "key": "pid",
        "value": "host"
      }
    ],
    "forcePullImage": true
  }
}
  • stable-v2
{
  "id": "/linker-connector",
  "cmd": "/linkerConnector proc -r /linker/proc -q /linker/sys -i 10000 -d kafka -t gce0515 -s 10.140.0.7:1025 -c http://localhost:10000 -f false",
  "cpus": 0.01,
  "mem": 32,
  "disk": 0,
  "instances": 2,
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/linker/proc",
        "hostPath": "/proc",
        "mode": "RO"
      },
      {
        "hostPath": "/sys",
        "containerPath": "/linker/sys",
        "mode": "RO"
      },
      {
        "hostPath": "/var/run/docker.sock",
        "containerPath": "/var/run/docker.sock",
        "mode": "RO"
      }
    ],
    "docker": {
      "image": "linkerrepository/linker_connector:stable-v2",
      "network": "HOST",
      "privileged": true,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "portDefinitions": [
    {
      "port": 10007,
      "protocol": "tcp",
      "labels": {}
    }
  ],
  "env": {},
  "labels": {},
  "healthChecks": []
}
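
Two operational notes on the apps above. First, with the UNIQUE hostname constraint, Marathon places at most one instance per agent, so "instances": 6 covers six distinct slaves; after adding agents, the count can be raised from the CLI (a sketch; the target count is an example):

dcos marathon app update /linker-connector instances=8

Second, the Marathon docker settings map directly onto docker run flags: privileged becomes --privileged, network HOST becomes --net=host, and a parameters entry such as pid=host (stable-v1) becomes --pid=host. A hand-run equivalent of the stable-v2 app on a single agent might look like this sketch (assumes the image's entrypoint allows the command to be overridden):

docker run -d --net=host --privileged \
    -v /proc:/linker/proc:ro \
    -v /sys:/linker/sys:ro \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    linkerrepository/linker_connector:stable-v2 \
    /linkerConnector proc -r /linker/proc -q /linker/sys -i 10000 -d kafka -t gce0515 -s 10.140.0.7:1025 -c http://localhost:10000 -f false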

Jupyter Notebook

Jupyter dap

{
  "id": "/jupyter-dap",
  "cmd": null,
  "cpus": 1,
  "mem": 2048,
  "disk": 4096,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/home/jovyan/work",
        "hostPath": "/home/zhangjie0220/jupyterwp",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "adolphlwq/docker-jupyter:pyspark-notebook-dap",
      "network": "HOST",
      "privileged": true,
      "parameters": [
        {
          "key": "pid",
          "value": "host"
        },
        {
          "key": "user",
          "value": "root"
        }
      ],
      "forcePullImage": false
    }
  },
  "env": {
    "TINI_SUBREAPER": "true",
    "PASSWORD": "password",
    "USE_HTTPS": "yes",
    "GRANT_SUDO": "yes"
  },
  "portDefinitions": [
    {
      "port": 30005,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
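
With USE_HTTPS=yes and PASSWORD set, jupyter docker-stacks images serve the notebook over HTTPS with password auth, by default on port 8888; since this app uses HOST networking, a quick reachability check on the node it lands on might be (a sketch; the 8888 default is an assumption about this derived image):

curl -k -I https://localhost:8888/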

Jenkins

JSON for Marathon:

{
  "volumes": null,
  "id": "/cicd/jenkins",
  "cmd": null,
  "args": null,
  "user": null,
  "env": null,
  "instances": 1,
  "cpus": 0.5,
  "mem": 2048,
  "disk": 0,
  "gpus": 0,
  "executor": null,
  "constraints": [
    [
      "hostname",
      "LIKE",
      "10.140.0.11"
    ]
  ],
  "fetch": null,
  "storeUrls": null,
  "backoffSeconds": 1,
  "backoffFactor": 1.15,
  "maxLaunchDelaySeconds": 3600,
  "container": {
    "docker": {
      "image": "adolphlwq/docker-jenkins:dind",
      "forcePullImage": false,
      "privileged": true,
      "portMappings": [
        {
          "containerPort": 8080,
          "protocol": "tcp"
        },
        {
          "containerPort": 50000,
          "protocol": "tcp"
        }
      ],
      "network": "BRIDGE"
    },
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/var/run/docker.sock",
        "hostPath": "/var/run/docker.sock",
        "mode": "RO"
      },
      {
        "containerPath": "/var/jenkins_home",
        "hostPath": "/tmp/jenkins_home",
        "mode": "RW"
      }
    ]
  },
  "healthChecks": null,
  "readinessChecks": null,
  "dependencies": null,
  "upgradeStrategy": {
    "minimumHealthCapacity": 1,
    "maximumOverCapacity": 1
  },
  "labels": null,
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "residency": null,
  "secrets": null,
  "taskKillGracePeriodSeconds": null,
  "portDefinitions": [
    {
      "port": 10005,
      "protocol": "tcp",
      "labels": {}
    },
    {
      "port": 10007,
      "protocol": "tcp",
      "labels": {}
    }
  ],
  "requirePorts": false
}
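
Because the container uses BRIDGE networking with no fixed hostPort, Marathon assigns host ports for 8080 and 50000 at launch; they can be read back from the task info (a sketch using the standard CLI):

dcos marathon task list /cicd/jenkins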

HDFS

HDFS runs pinned to a fixed node so that its data can be backed up. JSON for Marathon:

{
  "id": "/hdfs",
  "cmd": null,
  "cpus": 0.6,
  "mem": 1797,
  "disk": 0,
  "instances": 1,
  "constraints": [
    [
      "hostname",
      "CLUSTER",
      "10.140.0.7"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/hdfsdata",
        "hostPath": "/tmp/hdfsdata",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "dockerq/docker-hdfs",
      "network": "HOST",
      "privileged": true,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "env": {
    "HDFSURL": "10.140.0.7"
  },
  "portDefinitions": [
    {
      "port": 10006,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
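
Once the container is up, the namenode should answer on 10.140.0.7:9000 (the ckp_dir host used by the Spark jobs below). From any machine with a Hadoop client installed (an assumption), a quick check:

hdfs dfs -ls hdfs://10.140.0.7:9000/
hdfs dfs -mkdir -p hdfs://10.140.0.7:9000/dap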

New Spark on DC/OS

  1. Submit command matching the August linkerConnector code
dcos spark run --submit-args='-Dspark.mesos.coarse=true --driver-cores 1 --driver-memory 2048M --total-executor-cores 2 --class LinkerStreaming http://10.140.0.5:25276/main_streaming_linker.jar --zookeeper 10.140.0.3:2181 --batch_interval 5 --cassandra_host 10.140.0.4,10.140.0.6,10.140.0.7 --topics dap --ckp_dir hdfs://10.140.0.7:9000/dap --verbose'
  2. Submit command after adding date and removing uuid
dcos spark run --submit-args='-Dspark.mesos.coarse=true --driver-cores 1 --driver-memory 2048M --total-executor-cores 2 --class LinkerStreaming http://10.140.0.5:25276/main_streaming_linker_v1.jar --zookeeper 10.140.0.3:2181 --batch_interval 5 --cassandra_host 10.140.0.8,10.140.0.4,10.140.0.7 --topics dap-dev-v1 --ckp_dir hdfs://10.140.0.7:9000/dap_v1 --verbose'
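
dcos spark run prints a submission ID when the job is accepted; a streaming job can then be tracked or stopped with it (a sketch; <submission-id> is a placeholder for the returned ID):

dcos spark status <submission-id>
dcos spark kill <submission-id>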

Data backup: under the linkerDAPData/jupyter_data directory.

dcos spark run --submit-args='-Dspark.mesos.coarse=true --driver-cores 1 --driver-memory 2048M --total-executor-cores 3 --class LinkerStreaming http://10.140.0.5:25276/main_streaming_linker_dev.jar --zookeeper 10.140.0.3:2181 --batch_interval 5 --cassandra_host 10.140.0.8,10.140.0.6,10.140.0.4 --topics mic-test --ckp_dir hdfs://10.140.0.7:9000/mic-test --verbose'

dcos spark run --submit-args='-Dspark.mesos.coarse=true --driver-cores 2 --driver-memory 2048M --total-executor-cores 4 --class LinkerStreaming http://10.108.1.101:8081/main_streaming_linker_d1.jar --zookeeper 10.108.1.102:2181 --batch_interval 5 --cassandra_host 10.108.1.106,10.108.1.106 --topics mic-test --ckp_dir hdfs://10.108.1.108:9000/mic-test --verbose'