YARN container fails to launch

Date: 2022-07-04 20:45:51

When a Spark program runs on a YARN-managed cluster, it throws the error below regardless of how much or how little data it reads, while other programs run on the same cluster without problems.

16/11/14 00:13:44 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1478851289360_0032_01_000005 on host: gs-server-v-407. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1478851289360_0032_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:578)
    at org.apache.hadoop.util.Shell.run(Shell.java:481)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:763)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

This exception makes it easy to suspect a YARN configuration problem, but the job failed with the same error no matter how large num-executors and executor-memory were set. To isolate the failing part, all of the Spark program's logic was commented out, leaving only the statement that creates the SparkContext, and the error still occurred (see the stripped-down driver sketched after the snippet below). That pointed to the SparkConf settings. The SparkContext initialization looked like this:

import org.apache.spark.{SparkConf, SparkContext}

/**
 * Initialize the SparkContext.
 * @param isLocal whether to run in local mode (unused in this snippet)
 * @return the configured SparkContext
 */
def initSparkContext(isLocal: Boolean): SparkContext = {
  val conf = new SparkConf().setAppName("redir_parser")
    // Two JVM flags joined by a ';' -- this turns out to be the problem (see below).
    .set("spark.executor.extraJavaOptions", "-XX:-OmitStackTraceInFastThrow;-XX:-UseGCOverheadLimit")
  new SparkContext(conf)
}
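
For reference, the stripped-down driver described above looked roughly like this sketch (the object name RedirParserRepro is hypothetical; the conf mirrors initSparkContext):

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical minimal reproduction: all job logic removed, only the
// SparkContext is created, using the same (still buggy) configuration.
object RedirParserRepro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("redir_parser")
      .set("spark.executor.extraJavaOptions",
        "-XX:-OmitStackTraceInFastThrow;-XX:-UseGCOverheadLimit")
    val sc = new SparkContext(conf)
    // No transformations or actions are run, yet the containers still exit
    // with code 1 -- so the failure happens while the executor JVM starts,
    // not inside the job logic.
    sc.stop()
  }
}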

After adjusting and re-testing the SparkConf parameters one at a time, the cause turned out to be spark.executor.extraJavaOptions: the two flags -XX:-OmitStackTraceInFastThrow and -XX:-UseGCOverheadLimit were joined with a semicolon. Changing the separator to a space fixes the container launch failure.
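
The corrected method differs only in the separator. spark.executor.extraJavaOptions is a single string of JVM flags that Spark appends to the executor's java command line, so the flags must be separated by spaces:

import org.apache.spark.{SparkConf, SparkContext}

/**
 * Initialize the SparkContext (fixed version).
 * @param isLocal whether to run in local mode (unused in this snippet)
 * @return the configured SparkContext
 */
def initSparkContext(isLocal: Boolean): SparkContext = {
  val conf = new SparkConf().setAppName("redir_parser")
    // Space-separated JVM flags: each becomes its own token on the
    // executor's command line.
    .set("spark.executor.extraJavaOptions",
      "-XX:-OmitStackTraceInFastThrow -XX:-UseGCOverheadLimit")
  new SparkContext(conf)
}

With the semicolon removed, the executor JVMs start normally and the "Exit status: 1" container failures disappear.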