This repository has been archived by the owner on Nov 23, 2017. It is now read-only.

Submitting to EC2 cluster #95

Open
cantide5ga opened this issue Mar 29, 2017 · 10 comments
@cantide5ga

I'm surprised that I wasn't able to find spark-submit anywhere on the master.

What are other folks doing to submit to Spark when using spark-ec2? Using an external system with its own Spark package to remotely spark-submit? How would that work for code deployed and disseminated across the cluster?

@shivaram
Contributor

spark-submit should be on the master in /root/spark if the setup completed successfully.
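
For reference, a rough sketch of the usual workflow, assuming the default install path /root/spark on the master and placeholder names (my-keypair, my-cluster, my_job.py are all illustrative):

# Log in to the master node using the spark-ec2 login action
./spark-ec2 -k my-keypair -i ~/my-keypair.pem login my-cluster

# On the master, spark-submit should live under the Spark install directory
/root/spark/bin/spark-submit --master spark://&lt;master-hostname&gt;:7077 my_job.py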

@cantide5ga
Author

@shivaram This is good to hear, but I went through the process multiple times and /root/spark only contains a conf directory.

I'll dig in some more to see if I come up with something, thanks! Will follow up shortly.

@cantide5ga
Author

Confirmed a couple more times, and there are seemingly no errors on my end. If this isn't an issue for anyone else, any tips for figuring out what's going on here?

@cantide5ga
Author

Oh, this wasn't loud enough in the logs:

Initializing spark
--2017-03-29 19:05:47--  http://s3.amazonaws.com/spark-related-packages/spark-1.6.2-bin-hadoop1.tgz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.1.75
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.1.75|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-03-29 19:05:47 ERROR 404: Not Found.

ERROR: Unknown Spark version
spark/init.sh: line 137: return: -1: invalid option
return: usage: return [n]
Unpacking Spark
tar (child): spark-*.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
rm: cannot remove `spark-*.tgz': No such file or directory
mv: missing destination file operand after `spark'

I read in the docs that we can specify the Spark package. Is it required?
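
In other words, something along these lines if it is required; the cluster and key names are placeholders, and this assumes spark-ec2's --spark-version option takes a release number:

# Hypothetical launch that pins the Spark release explicitly
./spark-ec2 -k my-keypair -i ~/my-keypair.pem --spark-version 1.6.2 launch my-cluster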

@cantide5ga
Author

cantide5ga commented Apr 11, 2017

I read in the docs that we can specify the Spark package. Is it required?

Bumping this. I'm willing to push an update to make this required if the above is the expected behavior when not specifying a repo URL or version.

@shivaram
Contributor

I think this is a specific problem with Hadoop version 1 and Spark 1.6.2. Can you try passing the Hadoop version as 2 or yarn and see if it works?
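
Something like the following, with placeholder cluster and key names (assuming the flag is --hadoop-major-version and accepts 1, 2, or yarn):

./spark-ec2 -k my-keypair -i ~/my-keypair.pem --hadoop-major-version 2 launch my-cluster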

@cantide5ga
Author

To be clear, I've been getting past this by specifying a commit hash, which I prefer anyhow. But yes, I'll give this a try and provide some feedback. Thanks!
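
For completeness, a sketch of the workaround I mean; it assumes --spark-version also accepts a git commit hash together with --spark-git-repo, and &lt;commit-hash&gt; is left as a placeholder:

./spark-ec2 -k my-keypair -i ~/my-keypair.pem \
  --spark-git-repo https://github.com/apache/spark \
  --spark-version &lt;commit-hash&gt; \
  launch my-cluster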

@cantide5ga
Author

Adding --hadoop-major-version 2 to the launch command fixed it.

Anything we should do to either circumvent in code and/or document? Feel free to close if not.

@shivaram
Contributor

I think it would be great if we could change the default so it's not the failure case. Can you send a PR changing the default Hadoop version to either 2 or yarn?

@cantide5ga
Author

You got it. I'm busy the next few days but will follow through.

I'll also include some documentation on the use of --hadoop-major-version, which is seemingly missing from the README.
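
Roughly along these lines, as a sketch of wording rather than final text (the listed values assume the flag's current behavior):

--hadoop-major-version=HADOOP_MAJOR_VERSION
    Major version of Hadoop to install on the cluster. Accepted values
    are 1, 2, and yarn. The Hadoop 1 packages for some Spark releases
    (e.g. spark-1.6.2-bin-hadoop1.tgz) return 404 from the package
    mirror, as in the log above, so prefer 2 or yarn.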

Thanks again.
