Deploying to Akash DeCloud

We are now one week into Phase 3 of the Akashian Challenge, and what an incredible week it has been. Hundreds of apps have been deployed by the community, as we can see for ourselves on the chain:

$ akash query market lease list --node https://rpc.edgenet.akash.forbole.com:443 --count-total | grep total
total: "913"

After completing the first set of challenges, I got to work on deploying our Big Dipper block explorer on the Akash DeCloud, to see what the path to migrating real applications would look like. As a simple web app with a small backend DB, it seemed a great fit for testing on the edgenet (very similar to the app from week 1, challenge 2).

Frankly, I was blown away by the simplicity of the process.

Deploying Big Dipper

Step 1 - Translating

The Big Dipper repo on GitHub contains a docker-compose file which can be used for local development and testing. To deploy to Akash, all we needed to do was translate this into an Akash SDL file. Given that the key names are very similar in both, this was an easy task (of course, I had a few syntax errors on the first couple of tries, but they were easy to deal with). You can find our SDL file here for comparison.
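
To give a feel for the mapping (the snippets below are a simplified sketch with placeholder image names, ports and values, not our actual files), a docker-compose service such as:

services:
  web:
    image: forbole/big-dipper:latest
    ports:
      - "3000:3000"
    environment:
      - MONGO_URL=mongodb://mongodb:27017/meteor

translates roughly into the services section of the SDL, with the resource requirements, pricing and placement moved into separate profiles and deployment sections:

version: "2.0"
services:
  web:
    image: forbole/big-dipper:latest
    env:
      - MONGO_URL=mongodb://mongodb:27017/meteor
    expose:
      - port: 3000
        as: 80
        to:
          - global: true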

Step 2 - Deploying

The process to create a deployment with an SDL file is outlined in the documentation. While this part did not go smoothly on the first try, I learned about some helpful ways to debug the process. Once the initial syntax errors were out of the way, the next issue I came across was this one:

$ akash tx deployment create deploy-bd-akash.yaml --from $KEY_NAME --node $AKASH_NODE --chain-id $AKASH_CHAIN_ID -y --fees 5000uakt
Enter keyring passphrase:
Error: group westcoast: invalid total CPU (1000 > 2000 > 0 fails)

I had set the CPU requirement to 1 unit (1000 milli-units) for each of the two containers, i.e. 2000 in total for the deployment, which is what the error above is rejecting. The limit of 1 unit total per deployment has been set intentionally by the Akash team to ensure that there is enough capacity on the providers for the challenge.

To move forward, I checked the resource usage on a locally running instance and, based on this, figured that a split of 0.4 units for MongoDB and 0.6 for the front end might just work.
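
In the SDL, that split goes into the compute profiles, along these lines (the memory and storage sizes here are illustrative, not the values we actually settled on):

profiles:
  compute:
    web:
      resources:
        cpu:
          units: 0.6
        memory:
          size: 512Mi
        storage:
          size: 512Mi
    mongodb:
      resources:
        cpu:
          units: 0.4
        memory:
          size: 512Mi
        storage:
          size: 1Gi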

Step 3 - Debugging

With the CPU values updated, I was able to create the deployment and get a lease. The next issue came after sending the manifest to the provider: the lease would initially appear to be active, but after a few seconds it would disappear and close.
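
For reference, getting from a created deployment to a running manifest roughly followed the edgenet documentation: pick a bid, create a lease against it, and send the manifest to the provider. The commands below are paraphrased from the docs of the time, so the exact flags may differ slightly:

akash query market bid list --owner $ACCOUNT_ADDRESS --node $AKASH_NODE --dseq $DSEQ
akash tx market lease create --owner $ACCOUNT_ADDRESS --dseq $DSEQ --gseq $GSEQ --oseq $OSEQ --provider $PROVIDER --from $KEY_NAME --node $AKASH_NODE --chain-id $AKASH_CHAIN_ID --fees 5000uakt -y
akash provider send-manifest deploy-bd-akash.yaml --node $AKASH_NODE --dseq $DSEQ --oseq $OSEQ --gseq $GSEQ --owner $ACCOUNT_ADDRESS --provider $PROVIDER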

watch "akash provider lease-status --node $AKASH_NODE --dseq $DSEQ --oseq $OSEQ --gseq $GSEQ --provider $PROVIDER --owner $ACCOUNT_ADDRESS"

Running the command above showed that the mongodb instance started, but the web instance showed as unavailable. For a few seconds, browsing to the site returned an HTTP 503; then the web instance crashed, the browser response changed to a 404, and the lease-status command returned Error: server response: 500 Internal Server Error. (Note: the Akash team is already aware of the unhelpful 500 error and will release improvements for this.)

To troubleshoot this further, I needed to see what was going on inside the container. Using the --help flag of the akash CLI, I found the following command to stream the container logs and understand what was happening:

akash provider service-logs --node $AKASH_NODE --dseq $DSEQ --oseq $OSEQ --gseq $GSEQ --provider $AKASH_PROVIDER --owner $ACCOUNT_ADDRESS -f --service web

And the issue was... another syntax mistake, this time in the METEOR_SETTINGS environment variable passed to the web container, which was causing it to crash.
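
For context, this variable is passed to the container as a single JSON string in the service's env list, so one unbalanced quote or brace is enough to make the Meteor app die on startup. It looks something like this in the SDL (the settings keys and values here are placeholders, not our real configuration):

services:
  web:
    env:
      - MONGO_URL=mongodb://mongodb:27017/meteor
      - 'METEOR_SETTINGS={"public":{"chainName":"Cosmos Hub","bech32Prefix":"cosmos"}}'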

Step 4 - Success!

Once the variable was fixed, the app launched successfully - it is now available at http://edgenet.decloud.bigdipper.live/.

It is difficult to overstate how smooth this process has been, and it is equally difficult to overstate the potential that Akash has to disrupt the cloud computing market. We look forward to the launch of Mainnet 2, building and experimenting more, and seeing further adoption/growth of the network!
