GNU/Linux ◆ xterm-256color ◆ bash

Repository

radanalyticsio/spark-operator

Commands used in the recording

creates the operator

kubectl apply -f manifest/operator.yaml
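
Before moving on, it can be worth confirming the operator actually came up. A quick check (the deployment name and label are assumptions — they may differ in the manifest):

```shell
# verify the operator started (resource/label names are assumptions)
kubectl get deployment spark-operator
kubectl get pods -l app.kubernetes.io/name=spark-operator
```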

creates a new Spark cluster with two workers

kubectl apply -f examples/cluster.yaml
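
The contents of examples/cluster.yaml are not shown in the recording. A minimal SparkCluster custom resource of the kind the operator accepts might look like the sketch below — the apiVersion and spec field names are assumptions based on the radanalytics.io group; the file in the repository is authoritative:

```yaml
# hypothetical sketch of a two-worker cluster definition
apiVersion: radanalytics.io/v1
kind: SparkCluster
metadata:
  name: my-spark-cluster
spec:
  worker:
    instances: "2"
```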

scales down the cluster by editing the Custom Resource representing the cluster

kubectl edit sparkcluster my-spark-cluster
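
kubectl edit opens an interactive editor; the same scale-down can be scripted with kubectl patch. The spec.worker.instances field path is an assumption about the CR schema:

```shell
# non-interactive alternative to kubectl edit:
# shrink the cluster to one worker (field path is an assumption)
kubectl patch sparkcluster my-spark-cluster --type=merge \
  -p '{"spec": {"worker": {"instances": "1"}}}'
```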

deletes the Spark cluster

kubectl delete sparkcluster my-spark-cluster

prints another cluster definition

cat examples/with-prepared-data.yaml

result:

apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-cluster-with-data
  labels:
    radanalytics.io/kind: cluster
data:
  config: |-
    workerNodes: "2"
    masterNodes: "1"
    downloadData:
    - url: https://data.cityofnewyork.us/api/views/kku6-nxdu/rows.csv
      to: /tmp/
    - url: https://data.lacity.org/api/views/nxs9-385f/rows.csv
      to: /tmp/LA.csv
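
The downloadData entries above tell the operator to fetch those CSVs into the cluster's pods before Spark starts. After applying the ConfigMap you could confirm the files landed; the "-m" master-pod naming pattern below is an assumption based on the my-spark-cluster-m replication controller seen later:

```shell
kubectl apply -f examples/with-prepared-data.yaml
# find the master pod, then list the downloaded files
kubectl get pods
kubectl exec <master-pod-name> -- ls -lh /tmp
```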

exposes the Spark master replication controller as a NodePort service

kubectl expose rc my-spark-cluster-m --type=NodePort

gets the Spark master URL

kubectl get services
minikube service my-spark-cluster-m --url
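
If minikube's helper isn't available, the same URL can be assembled by hand from the node IP and the port the NodePort service was allocated; the jsonpath expression is standard kubectl, and the service name matches the one exposed above:

```shell
# read the allocated NodePort and combine it with the node IP
PORT=$(kubectl get service my-spark-cluster-m \
  -o jsonpath='{.spec.ports[0].nodePort}')
echo "spark://$(minikube ip):${PORT}"
```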

attaches to the cluster with spark-shell (the IP and port come from the previous step)

spark-shell --master=spark://192.168.39.87:32371
