# Coda Protocol Testnet [Beta] Retrospective — Week 6

### 2019-09-04

A new testnet release, ‘Grab Bag’, is here with a whole set of fun new challenges! We also have an exclusive BONUS this week for new members! Join this week to get double Testnet Points* on ‘New Member’ challenges and place yourself on the leaderboard alongside 100+ other members! (Read on for details on the challenges.)

## Table of Contents

Public Testnet Beta Milestone

Week 7 Challenges

Retrospective Week 6

Community in Action

Challenge Winners Week 6

## Public Testnet Beta Milestone

We’ve hit an important public Testnet Beta milestone this week — 100 active members who racked up over 200,000 testnet points*! Check out the leaderboard.

If you haven’t started yet, this week will be great timing. We have a New Member Welcome Bonus for double testnet points*, and our team is ready to help new members get started. Previously, we held a live demo to show members how to set up a node on the testnet. Send us (o1christine#8079 or any other O(1) Labs team member) a DM on Discord, and we can hop on a call to help you get started! We are also happy to help experienced users when they get stuck, have questions about this week’s challenges, or about anything else!

## Week 7 Challenges

Challenge #16 “New Member Welcome Bonus”

This week, we have a package deal with a big bonus for new members; this is a great chance to invite your friends to join! New members who complete all three of the following challenges this week will receive two times the total points* value as a bonus: #1 ‘Connect to Testnet’, #3 ‘Join Discord’, and #6 ‘Nice to Meet You’ (check out all challenge descriptions here). So instead of 700 pts* (500 + 100 + 100, respectively), new users will receive 1400 pts*! Get started!

Challenge #17 “Hello Memo”

Did you know that Coda supports 32 bytes of memo data in its transactions? That’s enough to fit a SHA-256 hash. Think of the possibilities! For this challenge, we’d like you to send a single transaction with a memo inside of it. coda client send-payment now supports a -memo flag. In that memo, please include the string “Hello Memo”. You can send this transaction to anyone, for example a friend. You’ll earn 500 pts* for doing so. As always, please hit the faucet with your Discord account so that we can associate a public key with your Discord username in order to add your score to the leaderboard.
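
Since a SHA-256 digest is exactly 32 bytes, it fits the memo perfectly. A quick Python sketch of the size math (illustration only; how the daemon actually encodes memos is not shown here):

```python
import hashlib

MEMO_SIZE = 32  # bytes available in a Coda transaction memo

# A SHA-256 digest is exactly 32 bytes, so a commitment to any
# document fits inside a single transaction memo.
digest = hashlib.sha256(b"some large document").digest()
assert len(digest) == MEMO_SIZE

# The challenge string itself is well under the limit.
memo = "Hello Memo".encode("utf-8")
assert len(memo) <= MEMO_SIZE
print(len(digest), len(memo))  # 32 10
```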

Challenge #18 “Oops”

Coda also supports cancelling transactions; you just need to make sure to cancel before the transaction gets included inside a block! For this challenge, we’d like you to cancel a transaction. This means you must (a) send a transaction and (b) take the transaction ID it returns and cancel it with coda client cancel-transaction. You’ll earn 500 pts* for doing so. In order to incentivize nodes to accept your cancellation, a fee greater than the fee present in the transaction pool is debited from your account. You’ll know the cancellation went through if, after a while, you notice your balance has been lowered (by the cancellation fee). As always, please hit the faucet with your Discord account so that we can associate a public key with your Discord username in order to add your score to the leaderboard.
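
A rough sketch of that fee rule (our own simplification; the increment below is hypothetical, not the daemon’s actual value):

```python
# Hypothetical sketch of the cancellation-fee rule described above:
# to incentivize nodes to accept a cancellation, its fee must exceed
# the fee of the pending transaction it replaces. FEE_INCREMENT is
# made up for illustration; the real daemon chooses its own value.
FEE_INCREMENT = 1

def cancellation_fee(pending_fee: int) -> int:
    """Fee debited for cancelling a transaction that paid `pending_fee`."""
    return pending_fee + FEE_INCREMENT

# Cancelling a transaction that paid a fee of 5 costs at least 6,
# which is why a successful cancellation shows up as a lowered balance.
print(cancellation_fee(5))  # 6
```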

Challenge #19 “GraphCoolL”

Coda has a GraphQL API! It’s super experimental, and we are already in the process of changing several parts of it, but we’ve noticed that some in the community have already successfully built interesting tools on top of it. We’re interested in your feedback! We want you to build something cool on GraphQL and tell us how we can make it better. Please share it as a post on Discourse with a [GraphQL] tag in the title: “[GraphQL] Your title here”. In order to receive points*, you must (a) include your source code (via a link to a public repo on GitHub or embedded in the forums) and license it under the Apache 2.0 license, and (b) include some constructive feedback (note anything you had issues with, wish were different, or found easy).

You’ll earn 500 pts* for sending us anything that we feel has achieved (a) and (b), as described above, and we’ll award a BONUS of an additional 2000 pts* for the coolest use and 1000 pts* for second place! Good luck.
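
As a starting point, a tool can talk to the daemon’s GraphQL endpoint with any HTTP client. A minimal Python sketch follows; the query field below is an assumption for illustration, so consult the live schema (e.g. via introspection) for real field names:

```python
import json

# Hypothetical query; check the daemon's actual schema before use.
QUERY = """
query {
  syncStatus
}
"""

def build_request(query: str) -> bytes:
    """Encode a GraphQL query as the JSON body of an HTTP POST."""
    return json.dumps({"query": query}).encode("utf-8")

body = build_request(QUERY)
# POST `body` to the daemon's GraphQL endpoint with your HTTP client
# of choice, then inspect the JSON response. Here we just round-trip
# the payload to show its shape.
print(json.loads(body)["query"].strip())
```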

## Retrospective Week 6

“In general, if any branch of trade, or any division of labour, be advantageous to the public, the freer and more general the competition, it will always be the more so.” — Adam Smith, The Wealth of Nations

We’re entering week 7 of the Coda Public Testnet Beta. Last week was the first time snark work took center stage, as the challenge revolved around nodes competing to generate and sell zk-SNARK proofs in what we affectionately call the “snarketplace”. This was an important week, as it allowed us to test the economic assumptions that underpin the snarketplace, which is intended to be a permissionless and dynamic system. We anticipate that by allowing as many nodes as possible to generate SNARKs, prices will fall over time, to the advantage of end users of the Coda network. That said, this week was only the first dry run of this experiment, and we had some cool outcomes. Notes from the week:

Snark work challenge

• Initially, when we launched the challenge, we were surprised to see so many nodes charging nothing for their SNARK work. When we dug into why this was happening, we realized that we had made an error in the challenge copy! The copy initially said that the metric for success was the number of SNARK proofs generated, when in fact we wanted to reward users who generated the most in fees over the week. While this mixup was frustrating for node operators, it helped validate our assumptions, because nodes optimized for selling SNARKs by reducing their fees to 0. This was great to see, as market prices behaved rationally when the goal was a fire sale (albeit an unintended one).
• When we updated the copy to prioritize selling fees, many snark workers weren’t able to sell their proofs because nodes in other time zones who hadn’t seen the update were consuming all the SNARK jobs. This helped confirm the logic that block producers, when given a choice, will buy lower-cost SNARK proofs.
• Interestingly, the O(1) Labs nodes were still selling proofs even though their fee was set to 2 coda. The reason was that we ran our nodes with the flag -work-selection seq, meaning we were producing proofs sequentially, while other nodes had defaulted to -work-selection rand, or random. Because block producers will always buy proofs in order, other nodes were producing work that wasn’t as highly prioritized. This allowed the O(1) Labs nodes to charge a higher fee, as nobody was competing for the same jobs. In fact, rand is quite inefficient in practice, as it has a 16x overhead against an optimal job selection for our network parameters. Once we mentioned this to other node operators, the playing field became more level. This is an area where we would love community input and discussion on improving the work selection algorithm. If anyone has thoughts and wants to contribute, please comment in Discord or the forums. We will also follow up with a more thorough explanation of how SNARK work is queued and processed.
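
The gap between seq and rand is easy to see in a toy model. In the sketch below (our own simplification, not protocol code), block producers buy proofs strictly in order, so a sequential worker’s first proof is sellable immediately, while a random worker proves many lower-priority jobs before reaching the one that sells:

```python
import random

def proofs_until_first_sale(selection: str, n_jobs: int,
                            rng: random.Random) -> int:
    """Toy model: block producers buy proofs strictly in order, so the
    first sellable proof is for job 0. Count how many proofs a worker
    generates up to and including its proof of job 0."""
    order = list(range(n_jobs))
    if selection == "rand":
        rng.shuffle(order)  # random work selection: any job, any time
    return order.index(0) + 1

rng = random.Random(0)
seq = proofs_until_first_sale("seq", 100, rng)
trials = [proofs_until_first_sale("rand", 100, rng) for _ in range(1000)]
avg_rand = sum(trials) / len(trials)
print(seq)            # 1: sequential work is sellable immediately
print(avg_rand > 10)  # True: random selection wastes effort on average
```

The 16x figure quoted above is specific to the real network’s parameters; this sketch only illustrates the qualitative effect.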

Implicit nonce finding issue

• As part of this challenge, we added a repeater bot that would send transactions automatically every couple of minutes (with a large fee, to enable more expensive snark work). We noticed that after a while, the repeater’s transactions stopped being accepted by the network because of invalid nonces. (All transactions include a nonce to protect against double spends.) We worked around it by killing and restarting this node, but we do have a hypothesis as to why it’s happening:
• We compute the nonce not only by looking at the ledger but also by looking at our local transaction pool to see what other transactions are already enqueued. We believe, but haven’t verified, that some transactions end up in this node’s transaction pool but fail to propagate through the network for some reason. As this situation is likely unavoidable, we need a different solution. We plan on looking at other cryptocurrency protocols to see what they do in this situation. If you have thoughts, please share on Discord or the forums!
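
A simplified sketch of this nonce inference, and of the failure mode, might look like the following (hypothetical logic for illustration; the daemon’s actual computation lives in the OCaml codebase):

```python
def next_nonce(ledger_nonce: int, pool_nonces: list) -> int:
    """Sketch of the inference described above (our simplification):
    use the ledger nonce unless the local pool already holds queued
    transactions, in which case continue after the highest of them."""
    if pool_nonces:
        return max(pool_nonces) + 1
    return ledger_nonce

# Healthy case: two transactions queued locally, so the next one
# gets nonce 7.
print(next_nonce(5, [5, 6]))  # 7

# Failure mode: a transaction with nonce 5 sits in the local pool but
# never propagated, so the network still expects nonce 5 while we keep
# sending nonce 6 and above -- every new transaction is rejected.
print(next_nonce(5, [5]))  # 6, but the network wants 5
```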

Memory leaks

• We’re still not sure what’s causing the memory leaks, but we’re adding more instrumentation to improve observability. We’ve been using the process of elimination to figure out the root cause, and we are pretty sure that this is happening somewhere in the networking layer. See this Github issue for more details: https://github.com/CodaProtocol/coda/issues/2979
• Some folks may be wondering why we didn’t use a language with resource safety (e.g. Rust) for the protocol, which would, when used properly, protect us against this sort of issue. The short answer is that OCaml allows for greater development velocity while retaining performance. OCaml makes it easy to create embedded DSLs, which we took advantage of with Snarky, our SNARK programming language. OCaml’s runtime performance, while adequate, will eventually become a bottleneck for us in certain sections of our codebase. We intend to offload that work to helper processes written in Rust.

libp2p

• We’ve added libp2p for peer discovery under a CLI flag! Please note that it is still under development and therefore buggy. We’re asking community members to try it and report bugs, but be aware that it is not stable enough to join the network seamlessly. Stay tuned for further updates as we migrate from Kademlia to libp2p.

Thanks again to the community for your participation in the testnet. This was a fun week, and we hope to refine and continue to test the snarketplace. This week’s challenges are a grab bag — see here for more details.

## Community in Action

Once we updated SNARK challenge #15 from producing the most zk-SNARKs to earning the most testnet tokens by selling SNARK work, users started to change their strategies right away. The Community MVP of the week, garethtdavies, created a script that outputs how users rank based on SNARK fees. This information was greatly welcomed by all; the competition wouldn’t be as exciting without a ranking board!

The testnet beta users used creative ways to get ahead in the competition. Everyone kept an eye on their competitors on the snarketplace (marketplace for zk-SNARKs), and it did not go unnoticed that user ansonlau3 succeeded in selling his SNARK work for a very favorable fee.

It has been incredibly fun to test and explore the dynamics of the snarketplace with the community. Users experimented with the SNARK worker fees, the selection of different SNARK works, increasing the number of transactions, and more. We are grateful to all members who joined us on the journey to develop novel technologies.

## Challenge Winners Week 6

Each week, we reward the community leaders who are making testnet a great, collaborative experience. You can see each week’s winners on the new leaderboard:

Challenge #15 — Something Snarky:

This week’s challenge was to explore the Snarketplace by producing snark work and earning tokens from the snark work fees. 37 node operators produced at least one SNARK, and the users that earned the most testnet coda from these fees are:

1. LatentHero who earned 2241 testnet coda — LatentHero even managed to sell two SNARKs with a fee of 999 testnet coda!

2. Dmitry D who earned 1274 testnet coda

3. y3v63n who earned 849 testnet coda

Challenge #4 — ‘Community MVP’:

Gold 1000 pts* : garethtdavies for being a leader in making the testnet a great, collaborative experience. He created a ranking board for the SNARK workers, continuously improved it, and shared it with the community; bounced ideas about the GraphQL API off the team during office hours calls; shared findings related to the challenge with other testers; and is always around to help out in the community. Congratulations, garethtdavies, it’s great to have you here!

Silver 500 pts* : whataday2day for being one of the most active community members last week and since the testnet launch, helping out community members when they have questions, and making it easy for them to navigate through resources by adding information to the Community Wiki on Coda Protocol’s forum. (Check out the instructions here if you would like to make contributions to the community Wiki too!)

Challenge #7 — ‘You Complete Me’:

Major Contribution 1000 pts* : garethtdavies for creating scripts/PRs for ranking SNARK workers and identifying the owners of the public keys.

The entire team at O(1) Labs would like to express our sincere gratitude and appreciation to our growing community.

We look forward to meeting you over on Discord!

*Testnet Points (abbreviated pts) are designed solely to track contributions to the Testnet and Testnet Points have no cash or other monetary value. Testnet Points are not transferable and are not redeemable or exchangeable for any cryptocurrency or digital assets. We may at any time amend or eliminate Testnet Points.


# Coda Protocol Testnet [Beta] Retrospective — Week 5

### 2019-08-27

Welcome to week 6 of the Coda Public Testnet Beta. This week’s challenge, Something Snarky, is all about generating zk-SNARKs. Node operators will be competing to produce as many SNARKs as possible to sell to block producers on the snarketplace — read on for more details. In addition, we have a brand new leaderboard design to keep track of your place on the list.

## Week 6 Challenges

Challenge #15 — Something Snarky:

This week’s challenge makes snark work the primary objective. Node operators that produce at least one SNARK will get 1000 pts*.

BONUS: Top 3 node operators who are able to sell the most SNARKs in the snarketplace (meaning your SNARK was not just produced, but also selected by a block producer) will win an additional 3000, 2000, and 1000 pts* respectively. Hint — your SNARKs are more likely to be bought if the fees are lower, so feel free to experiment with the economics! ;)

Challenge #15 ‘Something Snarky’

(Updated) This week’s challenge is to earn testnet tokens by performing SNARK work. Node operators that produce at least one SNARK will get 1000 pts*.

BONUS: Top 3 node operators who are able to earn the most tokens with their SNARKs on the snarketplace will win an additional +3000 pts* , +2000 pts* , and +1000 pts* respectively. Hint — your SNARKs are more likely to be bought if the fees are lower, so feel free to experiment with the economics!

## Week 5 Retrospective

Week 5 was an opportunity to retrospect on the state of the network over the past month and reflect on what could be improved to ensure a better experience for node operators. Maintaining the Coda testnet is a significant operational load, and as a small team of core contributors, each live issue we debugged took precious time away from coding, hardening infrastructure, and developing processes. As such, we took a much-needed week off from ops and support and went heads-down on fixing issues. Here’s what we focused on during week 5:

Nightly builds

We began the process of automatically deploying a nightly build of the testnet that builds the code in the develop branch and spins up a local testnet on AWS to run for 12 hours. This is critical, as it gives us an opportunity to catch low-hanging bugs that manifest under straightforward conditions. With this base level of testing in place, the bugs that do manifest in a released testnet should be the more complex ones arising from unpredictable user interactions, giving us a much more specific testing surface area.

The transition from Kademlia to libp2p

One of the issues we’ve noticed over the past 4 weeks is the network’s tendency to fork. We’ve changed our protocol parameters and taken measures to prevent this on our side, but we’ve still encountered forks each week. As this investigation details, many of our issues were due to Kademlia, our peer discovery layer. We finally bit the bullet and decided to transition to libp2p sooner rather than later. We made a lot of progress and estimate that this will be merged in by next week’s release.

Memory profiling

Several users reported daemon crashes due to memory leaks, so we spent some time profiling memory usage by the daemon. We added a flag -memory-profiling that runs the daemon with profiling enabled, using a variant of OCaml that supports statistically profiling the heap. See this PR for more details on profiling support and implementation: https://github.com/CodaProtocol/coda/pull/3247.

Thanks again to the community for accommodating one week of downtime for the testnet — we hope to continuously strengthen the testnet so that the experience improves week over week.

## Community MVP Winners

Each week, we reward the community leaders who are making testnet a great, collaborative experience. You can see each week’s winners on the new leaderboard:

Challenge #12 — ‘CLI FYI’:

We had 24 submissions for CLI feature requests, and many of them are great ideas that we’re adding to the roadmap to implement in the near future. The top five ideas were:

1st — 3000 pts*: Ilya | Genesis Lab (see feature)

2nd — 2500 pts*: alexander (see feature)

3rd — 2000 pts*: whataday2day (see feature)

4th — 1500 pts*: novy (see feature)

5th — 1000 pts*: pk (see feature)

Challenge #5 — Bug Bounties

Minor Bug Bounty 200 pts*: garethtdavies (https://github.com/CodaProtocol/coda/issues/3288)

Challenge #7 — ‘You Complete Me’

1000 pts*: garethtdavies for his contributions to the documentation and his extensive blog post on the Coda GraphQL API, which enables the development of future tools such as block explorers, wallets, and more. Check out his blog here.

Challenge #14 — ‘Leonardo da Coda’ (GIF contest)

300 pts* 1st place: alexander

200 pts* 2nd place: Pk and alexander (shared)

100 pts* 3rd place: onemoretime

Thanks again to the Coda community members who have been participating in the testnet. There are a lot more cool features and challenges in store, so stay tuned and stay connected!

* Testnet Points (abbreviated pts) are designed solely to track contributions to the Testnet and Testnet Points have no cash or other monetary value. Testnet Points are not transferable and are not redeemable or exchangeable for any cryptocurrency or digital assets. We may at any time amend or eliminate Testnet Points.


# Coda Protocol Testnet [Beta] Retrospective — Week 4

### 2019-08-20

Today, we’ve arrived at the fifth week since Coda’s first public testnet [BETA] was released. It has been incredibly fun to dive in with the core group of our community evangelists. While stress-testing the network and exercising main parts of the core protocol, staking, and producing blocks, unexpected situations arose, and the community graciously embraced these surprises. The testing provided us with invaluable feedback, and instead of having a new testnet release, we will focus on improving the infrastructure and stabilizing the testnet this week. The community is also invited to join us in this retrospective week and earn Testnet Points* along the way! Read on for more details.

## Week 5 Challenges

The total number of Testnet Points* that 105 users racked up on the leaderboard went over 140,000 last week! If you missed out on previous challenges, it is not too late to catch up, since we’re having our biggest points* challenge yet this week! You can also earn bonus points* if you were active in the previous week(s), see Challenge #11 ‘Don’t Break the Chain!’. Check out all challenges here.

New challenges are here for you to rack up more Testnet Points*!

Challenge #12 “CLI FYI”

Submit a product improvement or feature you’d like to see in the Coda command line interface (CLI). Post a new thread on the Discourse forums in the “Product” category and add this to the title: “[CLI Feature]”. The community can vote on it by “hearting” the post, and comment / discuss details in the thread. Add your Discord username to be counted for pts*.

Every feasible feature suggested will get 500 pts*. The top 5 features will win a bonus, and the community gets to vote for the top 5. Bonus: 2500, 2000, 1500, 1000, and 500 pts* respectively. A feasible feature means a well-scoped idea that Coda could technically implement, e.g. “The block-producing CLI command should tell you the % likelihood of winning a block and the time until the next slot you can produce blocks for.” There are no guarantees that suggested features will be implemented. But if you submit a PR implementing one, you could win a massive bonus of 5000 pts*, which means you could win up to 8000 pts* with this challenge!

Challenge #13 “My two codas”

We are overwhelmed by the active involvement and enthusiasm from the community for the Testnet [BETA]. Your contributions and feedback are invaluable to us. Earn 400 Testnet Points* for giving your two codas by filling out this survey.

Challenge #14 “Leonardo da Coda”

We’re holding a community arts contest! Bring out your most creative self to create Coda-related GIFs and emojis! Post your GIF or emoji on the forum, in this thread for GIFs and this thread for emojis. You can submit an unlimited number of entries, so cut loose! The community can vote on the best entries by “hearting” the entry posts, so don’t forget to also “heart” your favorite entries! The top 3 entries will receive bonus points: 300 pts* for the best GIF and emoji, 200 pts* for second place, and 100 pts* for third place.

## Week 4 Retrospective

Another week, another testnet release put to trial. This past week, about thirty block producers were in action, each staking roughly 3% of the total coda supply to participate in consensus. This was an important testnet, as the Coda network can only be decentralized if it supports many parties participating in consensus. This week was the first test of this in the wild, and it was successful: we saw over 20 nodes produce blocks and receive coda for their efforts. However, we also found some bugs that we were able to squash, further hardening the protocol.

There were three main technical issues that emerged this past week:

1. Initially, there was no snark work being done; only when 128 transactions were queued up to be snarked did the snark work begin. The delay was caused by an “off by one” bug. This was relatively easy to fix and should be resolved moving forward.

2. Some block producers were not able to produce blocks, despite being live and connected for multiple days. Initially, we thought it was just pure bad luck for these block producers and that they just had to wait for their turn to roll around. Shout out to garethtdavies, who alerted us that this was not the case, as they had an impressive uptime of 2 days, 4 hours, and 54 minutes, but no blocks produced. With garethtdavies’ help we debugged this issue and found that when the set-staking command was used to start the block producer, it didn’t produce blocks for some users. When garethtdavies switched to passing a -propose-key flag to the daemon when starting it, they were able to begin producing blocks. We will dig into this further to find out why the commands behaved differently.

3. The fork strikes again — in the previous testnet, there was a long fork issue that caused the network to partition. We thought we had debugged this and attributed it to two reasons:

• Colocating block producers with snark workers on the O(1) labs nodes, which may have delayed block production, thereby forking the chain when a new block was produced on a lagging tip
• Parameters Δ (delta, the acceptable network delay in number of slots within which past blocks can be accepted) and k (the number of blocks until the network assumes finality) were configured in a way that caused “forkiness”. Specifically, when k is smaller than Δ, blocks can be finalized and thrown away while nodes are still waiting for them. Nodes can then get stuck on a tip, unable to receive any more blocks, and are forced to build on top of it, causing a fork. This was resolved by establishing an invariant that k should always be greater than Δ.
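
The invariant can be expressed as a simple parameter check. This sketch is illustrative only; the parameter values shown are made up, not the testnet’s actual configuration:

```python
def validate_consensus_params(k: int, delta: int) -> None:
    """Sketch of the invariant described above: k, the number of
    blocks until finality, must exceed delta, the acceptable network
    delay in slots. Otherwise a block can be finalized and discarded
    while peers are still allowed to deliver it, stranding nodes on
    a stale tip and causing a fork."""
    if k <= delta:
        raise ValueError(f"forkiness risk: k={k} must exceed delta={delta}")

validate_consensus_params(k=24, delta=8)   # fine (values are made up)
try:
    validate_consensus_params(k=4, delta=8)
except ValueError as e:
    print(e)  # forkiness risk: k=4 must exceed delta=8
```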

However, given that another fork happened on day 6 of this testnet, and didn’t resolve itself, we dug further into this issue. This time, the fork was due to a partition in the network where the peer graph had a bridge with one node (the seed node). When that node dropped out, the graph split into two networks and was unable to heal. You can find the full investigation here: https://forums.codaprotocol.com/t/8-17-medium-rare-fork-investigation/113.

All said and done, it was a successful week in terms of uncovering and fixing issues. We will take a break this week and not have the testnet live, so that the Coda development team can focus on stability again and ensure that these same issues don’t pop up in future testnets. Thanks again for the community’s support and patience in this process; our spirits are boosted when we see comments like these:

## Community in Action

When last week’s testnet ‘Medium Rare’ was released, the early adopters in the community were eager to dive in and start staking.

More than 40 members signed up to participate on ‘Medium Rare’ as block producers and committed to running their nodes throughout the week. 30 of them were selected, and each received 20,000 testnet coda to participate in staking and produce blocks. One of the community members, awacate, enthusiastically started to promote her block producer shortly after she got her node up.

The coolest thing to see was the core group of community evangelists banding together to get block production started, and celebrating when everyone began to successfully sync with the blockchain…

…however, not everyone was lucky enough to get selected for proposing the next block. One member, jspadave, could have used some more luck at the beginning!

…but no worries! In the end, he produced about 30 blocks. In total, the passionate block producers in the community generated more than 400 blocks last week! Read on to see who produced the most blocks and won 4000 Testnet Points*!

## Community MVP Winners

Each week, we reward the community leaders who are making testnet a great, collaborative experience. Please join us in congratulating the Community MVP Winners!

Winners Challenge #9 ‘Block Party’

Last week, we had one of our biggest points* challenges yet, with up to 4000 points* on the line for the person who produced the most blocks. Congratulations to the winners!

1st place 4000 pts* - @dk808 (47 blocks!)

2nd place 3000 pts* - @y3v63n

3rd place 2000 pts* - @Gs

1000 pts* for everyone who produced at least one block: @Prague @jspadave @Dmitry_D @novy @Alexander @Danil_Ushakov @LatentHero @whataday2day @garethtdavies @Ilya123 @TipaZloy @Bison_Paul @Hunterr84 @MattHarrop/_Figment_Network @OranG3 @boscro @PetrG @ttt

Bug Bounties

Major Bug Bounty 2000 pts*: @garethtdavies for helping debug the CLI command that enabled block production but didn’t generate any blocks. (see Github)

Minor Bug Bounty 200 pts*: @doronz2 for finding an issue related to logging when bootstrapping. (see Github)

Community MVP Gold (+1000 Testnet Points*)

@whataday2day for always being around, being actively involved in the testnet from the start, and being helpful to other community members when they get stuck.

Community MVP Silver (+500 Testnet Points*)

@dk808 , @Bison Paul , @Ilya | Genesis Lab , @Alexander , @Turbotrainedbamboo for active participation in the Testnet [Beta] and helping with uncovering odd behaviour leading to forks.

## Conclusion

The entire team at O(1) Labs would like to express our sincere gratitude to all testnet participants for joining early on this unpredictable and fun ride, and for their unwavering support and belief in Coda.

*Testnet Points are designed solely to track contributions to the Testnet and Testnet Points have no cash or other monetary value. Testnet Points are not transferable and are not redeemable or exchangeable for any cryptocurrency or digital assets. We may at any time amend or eliminate Testnet Points.


# Coda Protocol Testnet [Beta] Retrospective — Week 3

### 2019-08-13

This is the fourth week that the Coda public testnet has been live! Last week, with the help of the early adopters in the community, we successfully stress-tested the network with increased transaction throughput, allowing us to identify potential issues and resolve them before the mainnet launch. A big thanks to all the testnet participants who have stayed active through the entire process; we wouldn’t be able to test the protocol in the wild without all your contributions!

This week’s testnet release is called ‘Medium Rare’ - a pun on steak, since users will be staking their coda this week to earn new Testnet Points*.

## Week 4 Challenges

More challenges are ready for you to win points*! Last week, we saw some fierce competition as participants battled to see who could send the most transactions. Read on for winners of last week and see where you rank now on the leaderboard.

This week’s challenges:

Challenge #9 ‘Block Party’

This is one of our biggest points* challenges yet, with up to 4000 points* on the line for the winner. In order to complete this challenge, you’ll need to stake coda as an active block producer. Block producers who generate more than 14 blocks this week will successfully complete this challenge: 1000 pts*. BONUS for those who produce the most blocks: 1st place +3000 pts*, 2nd place +2000 pts*, 3rd place +1000 pts*.

Challenge #10 ‘Bookworm’

Reach Trust Level 1 ‘Basic’ on Coda Protocol’s forum. We believe reading is the most fundamental and healthy activity in any community. If you are willing to spend a little time reading, you will quickly be promoted to the first trust level. Sign up on our forum, enter at least 5 topics, read at least 30 posts, and spend a total of 10 minutes reading posts to obtain Trust Level 1 ‘Basic’ and 100 Testnet Points*.

Challenge #11 ‘Don’t Break the Chain!’

Earn extra bonus points* for every uninterrupted week that you participate in the testnet and earn points*.

If you earned points this week and last week - 100 pts*

If you earned points this week and the previous 2 weeks - 200 pts*

If you earned points this week and the previous 3 weeks - 300 pts*

Etc.
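
The pattern above works out to 100 pts* per consecutive prior week, which can be sketched as:

```python
def chain_bonus(weeks_with_points: int) -> int:
    """Bonus per the pattern above: 100 pts* for each consecutive
    prior week (beyond the current one) in which you earned points."""
    return max(0, (weeks_with_points - 1) * 100)

print(chain_bonus(2))  # 100: this week and last week
print(chain_bonus(3))  # 200: this week and the previous 2 weeks
print(chain_bonus(4))  # 300: this week and the previous 3 weeks
```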

You can still earn Testnet Points* with past challenges! Find them here.

## Community in Action

In the third week after the release of Testnet [Beta], the team felt courageous enough to invite the early Coda adopters to stress-test the network and to see how the testnet would behave under high transaction volumes (see challenge #8 ‘Bonanza’). This was a success — more so than we imagined! As noticed by one of the community members, novy, the transaction throughput from everyone participating allowed us to surface issues in parts of the codebase that we can now dig into and fix.

This week has been a rollercoaster ride. At one point we experienced a long fork: one branch consisted of 287 blocks and the other of 225.

After some time, it seemed that this bug had resolved itself, and the community was celebrating the recovery…

…but we had to brace ourselves for another dive on the rollercoaster. About 9 hours of heavy testing from our dedicated community later, the network was down again.

After a few more ups and downs on the ride, we decided that it had been enough fun and excitement for the week. We took down the network to dig into the newly discovered bugs and fix them. This finally gave users like jspadave and whataday2day room to take a breather.

## Week 3 Retrospective

“Everybody has a plan until they get punched in the mouth.” — Mike Tyson

In this case, Mike may as well have been talking about cryptocurrency networks, because all bets are off when transaction volumes spike. Last week’s challenge, as mentioned above, was ‘Bonanza’, where users were encouraged to send as many transactions as possible. We knew we were in for some fun surprises, but could not have predicted the outcome. The increased transaction throughput from everyone participating allowed us to surface issues in the codebase as well as operationally. See below for more details:

Issue 1 — Snark work bottleneck

Initially there was a network choke point due to a bottleneck in snark work. In Coda, there is a tree-like data structure that tracks the “transaction snark scan state”, which consists of transactions queued up to be snarked. This data structure became saturated quickly and queued up work was not being picked up by enough workers. The O(1) Labs nodes run snark workers that usually pick up the slack, but in this situation, the transaction throughput was higher than usual.

The main issue was that all the O(1) Labs nodes were running both block producers and snark workers, causing both roles to compete for compute on the same machines. We nice’d those workers (nice is a Unix program that lets you reduce the priority of processes), but even the nice’d workers demanded too much of our VMs, and we couldn’t produce a block fast enough to fill a slot. This triggered an edge case that we had never exercised, causing all our nodes to die!

The fix was relatively simple - we spun up a new machine (with 96 cores!) dedicated purely to snark work.
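For readers unfamiliar with nice, the priority trick above can be sketched in a few lines of Python. This is an illustrative sketch only (the helper name `deprioritize_self` is ours, not part of Coda): a process may always lower its own priority this way, while raising it back requires privileges.

```python
import os

# `nice` values range from -20 (highest priority) to 19 (lowest).
# A snark worker sharing a machine with a block producer can lower
# its own priority so that block production wins the CPU under load.
def deprioritize_self(increment=19):
    # os.nice adds `increment` to the process's current niceness and
    # returns the new value; lowering priority needs no privileges.
    return os.nice(increment)
```

As described above, even deprioritized workers overloaded the VMs, which is why the real fix was moving snark work onto a dedicated machine.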

Issue 2 — The looooooong fork

Midway through the week, the network experienced a long fork where two chains diverged quite significantly. Adding to the woes, all of the O(1) Labs nodes were stuck on one fork, while many of the community members’ nodes were on the other. Since O(1) Labs had the majority of the coda staked, the other nodes were not able to produce blocks and were not observing any “correct” blocks being gossiped in the network. This effectively froze the network for most participants.

The long fork was primarily caused by a low k value. k is a constant that sets the number of blocks after which the network assumes a block is final, and it helps control the consensus mechanism. We artificially set the constant low in order to make the network move faster and hit more edge cases. Unfortunately, we were a bit overzealous, and this caused more forking in the network, as blocks were assumed to be finalized very quickly.
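As a rough illustration of the role k plays (the function below is a hypothetical sketch, not Coda’s actual API): a block is treated as final once at least k blocks have been built on top of it, so a small k means very fast, and very fragile, finality.

```python
# Hypothetical sketch of k-block finality. With a small k, blocks
# finalize after only a few confirmations, so a fork that diverges
# deeper than k blocks can strand nodes on opposite chains.
def is_final(block_height, tip_height, k):
    # A block is considered final once the chain tip is at least
    # k blocks ahead of it.
    return tip_height - block_height >= k
```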

In this case, after enough epochs (a measurement of time in Coda), the forks resolved on their own - but the downside was that all the transactions users made on the “dissolved” fork were lost!

Issue 3 — Strange Digits

The transaction mempool uses very interesting data structures under the hood (if you’re into diving deeper, check out this document). While we had very good unit test coverage on this data structure, it turned out that certain workloads which hadn’t appeared in our internal testing triggered a crash here. And it is a viral crash: every node you gossip the offending transaction to crashes as well! This is still under investigation, and you can track the issue here: https://github.com/CodaProtocol/coda/issues/3143

Thanks to all the community members for bearing with the technical difficulties this week. While the network may have struggled, the challenge was a success, allowing Coda developers to test theoretical code in production and harden the protocol.

This week’s testnet release is called ‘Medium Rare’ - a pun on steak, since users will be staking their coda this week. The url to connect will be: medium-rare.o1test.net:8303

## Community MVP Winners

Each week, we reward the community leaders who are making testnet a great, collaborative experience. Please join us in congratulating the Community MVP Winners!

Winners Challenge #8 ‘Bonanza’

Last week, we had our biggest points challenge yet, with up to 4000 points* on the line for the person who sends the most transactions. Congratulations to the winners!

1st place 4000 pts* - @Dmitry D (over a thousand transactions!)

2nd place 3000 pts* - @y3v63n

3rd place 2000 pts* - @whataday2day

Bug Bounties

Major Bug Bounty 2000 pts*: @AlexanderYudin - (see github)

Minor Bug Bounty 200 pts*: @ssh - (see github)

Community MVP Gold / Challenge #7 ‘You Complete Me’ (+1000 Testnet Points*)

@alexander for translating testnet instructions into Russian and adding useful screenshots. This makes it much easier for other technical Russian community members to get connected to Testnet [Beta]! You can find them here.

@whataday2day for writing step-by-step instructions to run a testnet node on WSL. Currently, the official documentation includes instructions for macOS and Linux. @whataday2day provided step-by-step instructions to get Coda running on Windows Subsystem for Linux. The instructions have been added to our Community Wiki on Coda Protocol’s forum.

Community MVP Silver (+500 Testnet Points*)

@jspadave for always being around, staying positive, and for his unwavering perseverance in returning to the testnet faucet, even when he did not get a drip to drink. Thank you for working with the engineering team to improve the faucet transactions, @jspadave!

@ssh for always being around, continuously supporting the team by providing useful feedback, and giving pointers for improvement (e.g., incorrect web links, inconsistent blockchain terminology, etc.).

@joe_land1 for always being around, supporting other community members when they get stuck, and offering to help them run a testnet node on Windows.

For participants who were not able to send 20 transactions last week due to network issues, don’t worry! We will have another challenge to send transactions where you will have the chance to win full points from this week again.

## Conclusion

We are overwhelmed by the community’s contributions and enthusiasm to participate in the Testnet [Beta] and to tackle the challenges. The transaction throughput that the community created in the last week proved to be of incredible value to help us strengthen the foundations for a new succinct blockchain and a decentralized economic system.

We would like to express our sincere gratitude to all testnet participants for joining early on this unpredictable and fun ride, and their unwavering support.

*Testnet Points (abbreviated ‘pts.’) are designed solely to track contributions to the Testnet and Testnet Points have no cash or other monetary value. Testnet Points are not transferable and are not redeemable or exchangeable for any cryptocurrency or digital assets. We may at any time amend or eliminate Testnet Points.

Read more →

# Coda Protocol Testnet [Beta] Retrospective — Week 2

### 2019-08-07

This week, we have a new release, ‘Share the Wealth’ and three new challenges to help you earn Testnet points*.

Since we launched Coda’s first public testnet in beta on July 24th, 2019, we’ve had 56 Codans connect and 74 rank on the leaderboard. Competition has been pretty fierce, but fellow Codans are a helpful bunch. Head on over to Discord and Docs to start tackling challenges!

## Week 3 Challenges

Fresh challenges are here! See where you rank on the leaderboard and post some points for the week.

Challenge #6 ‘Nice to Meet You’

Join the new Coda forum and introduce yourself in the ‘Introductions’ thread! Again, name, location and what you’re excited about are all good things to share in order to get the points! — 100 Testnet points*

Challenge #7 ‘You Complete Me’

Contribute documentation/material to help others to get started on the Testnet. Many people are interested in running a node, but some need a little help getting started.

• 1000 Testnet points* for major contributions — e.g. new content not yet covered in the official testnet docs (instructions in other languages, instructions for other operating systems besides macOS and Linux, etc.)
• 500 Testnet points* for minor contributions — e.g. complementary content to the official testnet docs (additional notes, etc.)

Challenge #8 ‘Bonanza’

Send as many transactions as you can! We’ll link the public key to a discord account by seeing what you use to ask the faucet for funds (and we’ll only count the first public key you used to request tokens).

• 1000 Testnet points* for sending 20 transactions
• Bonus: +3000, +2000, and +1000 points* for the 1st, 2nd, and 3rd place winners, respectively, for most transactions sent over the week! If you’re just joining now, check out past challenges and point values in our docs.

## Week 2 Retrospective & Week 3 Details

This release marks two weeks that the ‘Hello, Coda’ public testnet has been live and available for anyone to connect! In the first week, we were thrilled as the community embraced Coda and began tinkering with the testnet, highlighting both what went well and what could have gone better.

Looking at all the feedback, we came to one conclusion — the second week’s theme would be stability. Because we believe in open development and access for all, we shipped Coda’s testnet as early as we thought was possible. This ensured some level of stability so that there was actually something to play with, but as users realized, the services and network would go down after heavy and continuous use.

Thus, we did not ship another release in week 2, as we needed to take some time and harden the next release. A little bit more detail around our release history to set some context:

The team initially developed on a master branch, and cut a release branch about three weeks before the initial testnet launch. When we were thinking about a week 2 release, we realized that we would potentially be shipping three weeks’ worth of work in master with only one week of testing. Knowing that this could cause a regression, we pulled back and focused on ensuring the new code was stable and addressing existing issues. Additionally, Week 2 challenges centered around helping community members and surfacing new bugs.

So far, this strategy has proven correct as the new testnet release, ‘Share the Wealth’, feels much more stable and has a few main improvements:

• Tiny (the faucet dog) and Echo bot correctness and uptime have been improved
• Much cleaner status messages and logging
• Plus some bug fixes (including some of the crashes and p2p layer issues)
• This release also marks the first one where we’ve switched over from Groth-Maller to Bowe-Gabizon SNARKs!

You can find more details and the changelog in the release notes, and check out the docs for information on how to get started with the new release.

Two things to note about the new release:

• If you already have Coda installed, you’ll need to upgrade the package to make sure you’re not using the old client — see the installation section for details.
• Your testnet tokens do not carry over from ‘Hello, Coda’ to ‘Share the Wealth’. This way we all start on an even playing field for this week’s challenges.

Additionally, the Coda development team has switched to a git-flow workflow. TL;DR: develop will be the branch with active development, master will be stable, and release branches will contain code to be shipped. Releases will also happen weekly at 2pm PT on Tuesdays, and code-freeze is enforced Wednesday at 2pm of the previous week — as we automate more of our QA, we’ll push our code-freezes out to further days. Coda is entirely open source, and everyone is encouraged to get involved and begin contributing!

## The Community in Action

In the second week after we launched the first testnet, ‘Hello, Coda’, new early-adopters, like Bj below, are still joining the testnet, and completing challenges.

This week, the number of users who successfully ran a Coda node and sent their first transactions doubled to 54. It’s fun to see how the community likes to play around with the testnet. Codans, like pk, are even doing a little match-making, finding peers to exchange coda with.

Some community members have become a little attached to their testnet coda, but we’re not complaining.

Sadly for Bj, and for the other Codans who accrued coda over the last two weeks, funds will not carry over to the new release and new challenges (read on for details).

Another highlight this week was seeing all the highly-technical Codans in action. Joe_land1 and whataday2day took it to the next level by setting up nodes in operating systems that aren’t officially supported yet. Currently, the documentation includes instructions for macOS and Linux. These two combined forces to get Coda running on Windows.

After overcoming some struggles, they succeeded in running testnet nodes using Windows Subsystem for Linux.

Last week, we also hosted our first Office Hours for the testnet with about 15 community members joining. If you missed it, there will be more opportunities to meet our team and to get answers to testnet questions. Click here if you want to add the Testnet [Beta] Office Hours to your calendar.

## Community MVP Winners

Each week, we reward the community leaders who are making testnet a great, collaborative experience. In the first two weeks, the community members below stood out. Please join me in congratulating the Community MVP Winners!

Community MVP Gold (+1000 Testnet points*)

@ssh for their spirited participation in this wild ride of launching the testnet! Ssh used their technical knowledge to help other community members get connected to the testnet and did a lot of troubleshooting when they got stuck. Our technical team is super thankful. Thanks, ssh! (week 1 winner)

@Alexander for putting very detailed video & docs together for our technical Russian community. It proved to be very helpful for members to start their validator’s journey. (week 1 winner)

@pk for becoming a leader in the community with their positive and helpful attitude since testnet [Beta] was released. @pk also documented all their steps to connect to the testnet. It is open source, so other community members are invited to join and contribute as well! (week 2 winner)

Community MVP Silver (+500 Testnet points*)

@Dmitry D for jumping in when our testnet bot Tiny fell asleep during the weekend. His transactions made it possible for many new validators in the community to have a positive and fun experience with the testnet as well! (week 1 winner)

@turbotrainedbamboo for creating a cool visualization of the nodes he was chatting to around the world, and for sharing it with the community. If you want to take a look, the visualization was featured in our previous blog post, Testnet [Beta] Retrospective Week 1. (week 1 winner)

@garethdavies for his active involvement in the community since the release of testnet [Beta]. Whether it’s providing the team with feedback on the testnet, helping out other community members to set up a node or sending transactions when our testnet bot fell asleep, @garethdavies was often around! (week 2 winner)

The entire team at O(1) Labs would like to express our sincere gratitude and appreciation to our growing community.

We look forward to meeting you over on Discord and our new forum!

Read more →

# Coda Protocol Testnet [Beta] Retrospective — Week 1

### 2019-07-29

Coda Protocol launched its first public testnet in beta on July 24th 2019 at 2pm PST. As the first succinct blockchain, it generated significant interest from the community.

## The Community in Action

Our community of early-adopters showed up in full-force, with over 250 people joining Discord. Of those, 32 people successfully ran a Coda node, connected to the network, and sent their first transaction. An additional 32 people started playing around in the documentation, and are working on getting their Coda nodes set up for Week 2.

The community is now active in all parts of the globe, with particularly strong participation coming from Australia, China, Europe, Russia, and North America. One community member, turbotrainedbamboo, created a cool visualization of the peers he was chatting to on 07.29.2019 at 7:16 PM PST and shared it with the community.

It’s clear from the conversations on the Coda Discord server that our community is highly-technical, loves a challenge, and is passionate about the long-term potential of establishing the foundations of a new, decentralized economic system.

With the release of the testnet beta, we also launched Testnet Points* to recognize community members’ contributions to the testnet and the weekly challenges. We didn’t really anticipate how popular the point system would be… but the whole team was receiving messages from every channel like this all week:

I guess everyone loves a little good old-fashioned competition? And don’t worry… she got her points!

You can check out the public leaderboard to see how all 59 community members are ranking.

The coolest part of seeing the community in action came at a moment when the testnet tooling hit a potential roadblock. On July 26th, at 7am PST, our faucet bot, ‘Tiny’, went down for a period of time, meaning that no one could get funds to participate in the network. Our team was slowly getting up and making their way online, but a community member, ssh, stepped up and sent another community member, Wallace, 45 coda, allowing them to continue sending transactions on the network. This gave time for our team to troubleshoot and restore service to Tiny. While this could have potentially been a negative experience, the Coda community showed its cooperative spirit and banded together.

## Testnet Philosophy

When building Coda, we strive to be as transparent and permissionless as possible. We open sourced our code in October 2018, and since then have focused on sharing developments with the community as early as possible, including going live with Coda’s first public testnet. That fundamental decision meant that our first week was a bit chaotic, unpredictable, a little bit of a wild ride at times, and a ton of fun.

## Week 1 Retrospective

We launched the first testnet, ‘Hello, Coda’, at 2pm on Wednesday, July 24th. The first few days after launch went very smoothly and we felt quite confident about the testnet stability. But per Murphy’s law, it was only on Friday and the weekend that some bugs started throwing us wrenches. By Friday, our internal nodes had been running straight for 3 days, and a combination of the continuous uptime plus all of the activity on the network triggered a file descriptor resource leak (too many open files). This caused all of the nodes internal to O(1) Labs to crash at around the same time.

While rejoining the network wasn’t an issue, this presented a problem for restarting our nodes. Namely, we’ve yet to finish implementing ledger persistence, so none of our nodes had the ledger state, meaning that all the data might have been lost. Fortunately, one community node remained connected to the network, allowing one of our nodes to bootstrap to the most recent state, share it with our other nodes and stabilize the network. This event showed the beauty of Coda’s peer-to-peer network, where community nodes play a big role in helping secure the network and recover from issues.

A number of other bugs were also discovered by the community, which we are prioritizing fixing this week:

• The above-mentioned file descriptor bug which briefly took down our network
• Syncing to the network is only triggered when a block is received over gossip, leading to slow initial sync times (sometimes 10+ minutes)
• A bug on OS X caused random crashes for nodes on that platform every few days
• A bug in Kademlia leading to nodes connecting to few (sometimes just 1) peers
• Missing parameters in our GraphQL API, preventing our testnet bots from sending more than 1 transaction per block

Thanks to everyone who has helped find bugs and isolate these issues so far! We’re really excited to work towards a robust, decentralized piece of software together.

As a reminder, our hours are 10–6 PM PST, M-F. We’ll be doing our best to provide support during these hours, but may not be available right away outside of them. We’ll also be doing our best to keep the network running if something like the above happens again, but no guarantee the network won’t go down as we fix the issues.

## Release Schedule

We plan to release new versions of the testnet software every Tuesday at 2pm PST. As it is still early days, and our team is figuring out the cadence that works best for us and our community, this may change. We will keep the community updated.

Additionally, since last week was a short week, we will continue to run the original version released last week. The next software release will occur on 08.05.2019 at 2pm PST.

## NEW Challenges — Week 2

As we kick off Week 2, we are adding two on-going challenges that will run for multiple weeks. Have fun and see how many testnet points* you can rack up! If you have questions, join Office Hours on 07.30 @ 4–5pm PST.

Challenge #4 (on-going): Community MVP — Various pts.* We saw so many people becoming leaders in the community and want to recognize them. Each week, we will recognize the winners of the previous week based on the point values below. We may give out all the awards in a week, or none, or several at each level. The more active the community is, the more points we can award in this category. Winners announced on Discord each week.

• Gold — 1000 pts.— made a major, or on-going contribution to the community throughout the week. A major stand-out.
• Silver — 500 pts.— always there, always helping, always positive.

Challenge #5 (on-going): Major and Minor Bug Bounties — Various pts.

• Major — 2000 pts.— reported a new daemon crash that wasn’t already on the known issues list.
• Minor — 200 pts.— reported a new issue related to minor bugs in the daemon, documentation, or testnet.

Since last week was a short week, we are continuing Week 1 challenges all this week. If you haven’t received your Week 1 points*, it’s not too late…

## Week 1 Challenges

Challenge #1: Connect to Testnet — 1000 pts. for anyone who sends a transaction to the echo service (see the docs for more instructions).

Challenge #2: Community Helper — 300 pts. are awarded to anyone who helps another member of the community. This could include answering a question, helping them navigate the docs, and generally giving support and encouragement for those trying hard to get involved. We can only award points for what we see, so make sure you’re doing it in one of the official testnet channels so everyone can learn!

Challenge #3: Join Discord — 100 pts. awarded for introducing yourself in the #testnet-general channel. Name, location and what you’re excited about are all good things to share in order to get the points!

The entire team at O(1) Labs would like to express our sincere gratitude and appreciation to our growing community.

We look forward to meeting you over on Discord!

*Testnet Points (abbreviated ‘pts.’) are designed solely to track contributions to the Testnet and Testnet Points have no cash or other monetary value. Testnet Points are not transferable and are not redeemable or exchangeable for any cryptocurrency or digital assets. We may at any time amend or eliminate Testnet Points.

Read more →

# O(1) Labs Releases Testnet [Beta] for Coda Protocol, the First Succinct Blockchain

### 2019-07-24

We’re thrilled to announce that Coda Protocol launched its first public testnet in Beta today, July 24th 2019 at 2pm PST. We’ve seen interest from more than 700 members of our community in joining, so we expect that this will be an exciting and engaging process thanks to the efforts of a core group of our community evangelists.

In this blog post, we will introduce Coda at a high level, including our motivations, philosophy, and technology innovations, as well as share a bit more about the testnet process and how the community can get involved.

## About Coda

At the core of blockchains and cryptocurrencies is the idea that they allow us to trade centralized institutions for decentralized networks. At a base, technological level, this comes from a distributed network of validators being able to come to consensus about the state of the blockchain. Put a different way, networks are decentralized in large part because of the breadth of the network of validators who can independently validate them.

A decade into the blockchain and cryptocurrency industry, however, we’re discovering an interesting quirk creating a hidden force for centralization. Blockchains are, increasingly, victims of their own success.

As a blockchain is utilized more and more (a good thing), the chain becomes longer and longer. As this happens, it decreases the pool of people who have the computational and bandwidth capacity to participate in validating and securing the network (a bad thing). Over time, this pushes blockchains towards a new form of centralization with only those with the highest network and computational capacity at the top of the food chain.

At its base, Coda is designed to address this problem. Our goal is to enable user-owned, universally-accessible digitized finance through the cryptographic innovation of a tiny, accessible, constant-sized blockchain.

## Here’s how it works.

Whereas other blockchains grow with every new transaction, we replace the blockchain with a constant-sized zero-knowledge proof only a few kilobytes in size, about a page of text. This proof replaces checking the entire blockchain’s history, standing in exactly for the process of verifying the underlying transactions. What’s more, it can be updated efficiently by using recursive zk-SNARKs.

Let’s use the name succinct blockchain to refer to the combination of a state (consisting of the set of account balances and information required for consensus) and a zk-SNARK which says “there is a blockchain which you could have downloaded that would have convinced you that this is actually the current state”. Just as we can update a plain old blockchain by appending a new block to the end, we can update a succinct blockchain by applying the block to get an updated state and combining the previous proof with the new block to get an updated proof.

We accomplish this by creating a proof that says “there is a proof that said state S had a blockchain backing it AND there is a single block which, when applied to state S, gives us that new state”. Because zk-SNARKs are succinct, this new proof is still as tiny and easy to verify as the last. For those who want to learn more, this technique is called recursive composition of zk-SNARKs.
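The update step can be sketched with a toy model. The hashes below merely stand in for zk-SNARKs (a real recursive SNARK verifies the previous proof inside a circuit, which a hash cannot do); the point is only that the new proof is computed from just the previous proof and the new block, and stays constant-sized no matter how long the chain gets.

```python
import hashlib

def apply_block(state, block):
    # Stand-in state transition function.
    return hashlib.sha256((state + block).encode()).hexdigest()

def prove_update(prev_proof, prev_state, block):
    # A real succinct blockchain would produce a SNARK attesting
    # "prev_proof verified AND applying `block` to prev_state gives
    # new_state"; here a hash commitment stands in for that proof.
    new_state = apply_block(prev_state, block)
    new_proof = hashlib.sha256(prev_proof + block.encode()).digest()
    return new_proof, new_state

# The "proof" stays 32 bytes whether the chain has 1 block or 1000.
proof, state = b"\x00" * 32, "genesis"
for i in range(1000):
    proof, state = prove_update(proof, state, f"block-{i}")
assert len(proof) == 32
```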

There’s obviously a lot to unpack in terms of how this works, so if you want to dive in deeper, we recommend you check out this technical introduction to Coda.

## Why this matters.

What does this approach do for Coda though? In short, it means that we can have a cryptocurrency that is constant-sized and resource-efficient. In turn, this addresses accessibility problems and the inevitable success-enabled form of centralization discussed above.

Practically, this means we can have a better form of programmable money:

• No compromisable third parties (with unknown incentives, uptime dependencies, and little accountability) that users must rely on to trust the state of the blockchain and use the cryptocurrency. Users can run nodes from any device at any time.
• The Coda javascript API has equivalent security to a full node. This creates minimal complexity to integrate and build with Coda.
• Coda’s constant size and resource efficiency aren’t limited to right now; they will persist regardless of throughput or years of transaction history.

## About Testnet [Beta]

Cryptocurrency projects run the gamut from extremely gated and permissioned to entirely open source. We’ve always been focused on the latter. Since we open sourced our code in October 2018, everything we’ve built has been public.

This same philosophy informed our decision to open our testnet to public beta now. We’re opening ourselves as early as possible because we want to give our community a chance to get involved. Because it is so early, however, those who do wish to dive in should expect it to be a bit chaotic, messy, unpredictable, and almost certainly a ton of fun.

To get specific, we’re looking for:

1. Early adopters — We’re hoping to find those folks who love getting involved and learning about emerging technologies before they’re polished for the mainstream.
2. People who love a good challenge — SNARKs have incredible potential, but they still need a lot of work to realize that promise.
3. Long term thinkers — Building the foundations for a new economic system isn’t done overnight. We’re looking for people who are aligned with our mission and in it for the long term.

Speaking of fun, we’ll be adding some fun community incentives to get this network really humming. These include:

1. Weekly challenges — based on our goals, we’ll issue weekly challenges to organize behavior.
2. Testnet Points* — get recognized with points for achieving qualitative and quantitative objectives.
3. Leaderboard Spreadsheet — publicly track your points and ranking compared to other participants.

The team has published all the necessary documentation to help you get started. If you’re as excited about SNARKs as we are, or simply want to be involved in helping realize a truly decentralized financial system, we can’t wait to work with you.

Head on over to the Coda Discord server to get started, and let’s dive in!

*Testnet Points are designed solely to track contributions to the Testnet and Testnet Points have no cash or other monetary value. Testnet Points are not transferable and are not redeemable or exchangeable for any cryptocurrency or digital assets. We may at any time amend or eliminate Testnet Points.

Read more →

# Zero-knowledge Proofs: An Intuitive Explanation

### 2019-07-08

Zero-knowledge proofs (ZKPs) are a powerful cryptographic primitive that enables you to prove that you have a secret, without revealing it to anyone. If you are hearing about ZKPs for the first time, you are likely to say “Hah! That sounds impossible.” Read on to get an intuitive understanding of what they are. But first, some background. ZKPs were invented by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in 1985. Ever since, ZKPs have been one of the most active areas of research in cryptography. Recently, they have also had a significant impact on real-world applications, specifically blockchain technologies. Zcash, a pioneering blockchain project, employed ZKPs to achieve anonymity in financial transactions. At O(1) Labs, we are building Coda, the first succinct blockchain, using ZKPs. No matter how many transactions are recorded on the blockchain, the blockchain remains at most the size of a few tweets.

## What are Zero-knowledge Proofs?

The purpose of zero-knowledge proofs is to convince someone you know something without revealing what that thing is. For example, you might want to convince someone that you know the solution to a puzzle without giving them the solution.

Let’s look at a particular puzzle to see how you can accomplish exactly this.

### A Puzzle called 3-Coloring.

The 3-coloring puzzle can be described as follows. You are given a graph of nodes and edges (like in the figure below). The task is to find a “3-coloring” of the graph, that is, a coloring of the nodes with three different colors in such a way that no two adjacent nodes have the same color.

### A Zero-knowledge Proof for the 3-coloring Puzzle.

We will construct a ZKP protocol for the 3-coloring puzzle. Before that, let’s quickly recall the two properties we are looking for in the protocol.

A ZKP protocol between you and someone else – call her Verifier – must satisfy the following properties:

• If you are cheating (i.e., if you do not know a 3-coloring), then Verifier should be able to catch you – this property is called soundness

• Verifier should not learn anything about the 3-coloring – this property is called zero knowledgeness

Now, let’s try to construct a ZKP protocol in which your proving and Verifier’s verifying procedures are as follows.

Proving: Imagine the graph drawn on the floor in a closed space. Per the 3-coloring you know, you would place the corresponding colored balls on the nodes. Then you would completely cover the balls with inverted opaque bowls.

Verifying: Then, Verifier comes and points at an edge of her choice. You would lift up the two bowls on either side of the edge. Verifier verifies that the balls revealed are of different colors. If they are not, you are caught cheating.

Now that we’ve defined our protocol, we want to check that it satisfies zero-knowledgeness and soundness (the property that you can’t cheat).

## Zero-knowledgeness

Note that no information about the 3-coloring is revealed to Verifier, since whatever Verifier viewed could be simulated by just picking two random, but differently-colored balls. Thus, this protocol provides perfect zero knowledgeness.

## Soundness

If you didn’t really know a 3-coloring, then during the proving stage you must have placed same-colored balls on some two nodes connected by an edge. That means that during the verifying stage, since Verifier picks an edge at random, the probability that she catches you is at least $$1 / |E|$$, where $$|E|$$ is the number of edges in the graph.

Note that there is still a significant probability that you could cheat and get away with it (specifically, $$1 - 1/|E|$$). This probability is called the soundness error. We would like to make this error negligible. Here is an idea: repeat the above protocol multiple times. Now you can get away with cheating only if you get away with it in every one of those executions. This significantly reduces the soundness error, as quantified in the following.

### Soundness, More Rigorously

The more rounds you execute, the higher Verifier’s confidence in the soundness of your claim. Let’s say that you do not know a 3-coloring for the graph. The probability that you will not get caught can be bounded as follows.

Let $$N$$ be the total number of rounds. For each round $$i \leq N$$:
\Pr[\text{You will not be caught in Round } i] \leq 1 - \frac{1}{|E|}
\Pr[\text{You will not be caught in any of the } N \text{ rounds}] \leq \left(1 - \frac{1}{|E|} \right)^N
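The bound above also tells us how many rounds are enough for any desired confidence. A quick numeric sketch in Python (edge count and error target are illustrative):

```python
# Soundness error after N repetitions: a cheating prover escapes
# detection with probability at most (1 - 1/|E|)**N.
import math

def soundness_error(num_edges, rounds):
    return (1 - 1 / num_edges) ** rounds

def rounds_needed(num_edges, target):
    """Smallest N with (1 - 1/|E|)**N <= target."""
    return math.ceil(math.log(target) / math.log(1 - 1 / num_edges))

E = 100  # illustrative number of edges
print(soundness_error(E, 1))       # one round: error still 0.99
print(rounds_needed(E, 2 ** -40))  # a few thousand rounds for a 2^-40 error
```

The number of rounds grows only linearly in $$|E|$$ (and logarithmically in the inverse error), so driving the error down to negligible is cheap.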

## Revisiting Zero-knowledgeness

Unfortunately, there is an issue in the above protocol. Since Verifier gets to see the coloring of two nodes at a time, she can learn the entire 3-coloring by running enough rounds. Luckily, we can get around this issue: after every round, you ask Verifier to step out of sight, randomly permute the colors, and cover all the nodes again. That way, anything Verifier might have learned in one round is irrelevant in subsequent rounds, since whatever she viewed can be simulated by just picking two random, but differently-colored, balls in each round.

Read more →

# Coda + Dekrypt: The SNARK Challenge

## A global competition to speed up the SNARK prover

Coda Protocol, CoinList, and DeKrypt are partnering to launch a $100k global public research challenge to dramatically speed up zk-SNARKs. We expect the contest to produce huge advancements in the state of the art, and we’ll be contributing all of those advancements back to the public domain with a permissive open-source license that allows them to be used by the broader crypto ecosystem. SNARKs have been developing rapidly in the past few years, and in crypto they’re one of the most promising ways to bring scalability and privacy to blockchains. But in many ways we’re still just scratching the surface of what’s possible: significant increases in SNARK performance remain to be unlocked. For example, while many of the computational steps involved in SNARK proving are perfectly parallelizable, this property hasn’t yet been fully exploited on hardware designed for it, such as GPUs. In the open and collaborative spirit of the crypto space, we’re partnering with teams from around the ecosystem to improve zk-SNARKs and share the resulting research with the broader community. CoinList and DeKrypt will be co-organizers, and sponsors include the Zcash Foundation, Tezos, and Protocol Labs. In addition to the contest, we’ll be hosting regular office hours with top researchers and practitioners and sending out online tutorials. There will be tracks to improve zk-SNARKs through both cryptographic improvements and optimization of existing implementations. Sign up here if you’d like to get involved.

Read more →

# New O(1) Labs Funding and Coda in 2019

## Announcing a fundraise in O(1) Labs and new ways community members will be able to contribute to Coda

### by Evan Shapiro

### 2019-04-02

Tldr: We raised a new round of funding and over the next few months will be inviting users to become more deeply involved with Coda, a more accessible cryptocurrency. Scroll to the bottom to learn how you can participate.
We’re excited to announce that last fall, we finished a new round of funding for O(1) Labs! We raised $15M from a group of fantastic investors including Accomplice, Coinbase Ventures, Paradigm, and General Catalyst. These excellent funds, along with a few other high value-add investors, are joined by additional support from our amazing seed investors including MetaStable, PolyChain, Electric Capital, and others.

Since day 1, we have been focused on radically expanding the reach of cryptocurrencies by addressing the root issues behind blockchain’s scalability challenges. We believe our solution to this problem, Coda, will greatly expand the level of decentralization available to a cryptocurrency and the applications it is capable of. Our team of nearly 20 has executed aggressively, and today we are excited to share plans around what’s next on our path to Coda’s launch.

# Coda

When Bitcoin was created, it showed the world that it is possible to create any economic system we want, unbound by the historical limitations of nation states and large organizations.

Since then, though, cryptocurrencies have run into a problem: as they have become more popular, control and usage have tended toward centralization. No matter the chain, both consensus and usage have centralized around large mining pools, delegated proof of stake, and trusted third-party validation.

This is no coincidence. The fundamental technology underlying cryptocurrencies is their blockchains. As cryptocurrencies have grown in popularity, their blockchains have grown in size, in the process becoming unwieldy and forcing out participation from all except those with the capacity and willingness to dedicate serious compute resources to validating the chain. Even doing just a few transactions per second, the most popular cryptocurrencies have become hundreds of gigabytes in size. Some with more centralized consensus such as delegated proof of stake and unbounded throughputs have even reached terabytes.

Coda solves this problem. Leveraging zero knowledge proofs, it replaces the traditional blockchain with a tiny, portable cryptographic proof, about the size of a few tweets. This means that anyone can get the same level of security as a standard blockchain with a tiny fraction of the requirements. Full verification of the chain is available to any device, and participating in consensus is highly accessible. And even better, because the proof is constant sized, using Coda stays inclusive and decentralized even at millions of users, thousands of transactions per second, and with decades of history.

Coda takes this a step further, as well. Because the chain is so small, Coda can be used from websites without requiring an extension and from mobile phones with intermittent connectivity, enabling an experience where anyone has the option to fully use cryptocurrency applications without intermediaries.

Developers will be able to reach users simply by dropping a <script> tag into their frontend and writing a few lines of code, without requiring users to download extensions or trust any third parties. By taking advantage of this, developers will be able to build new websites and applications impossible in today’s world. A social network can prove it’s treating your data and privacy with respect. New kinds of games can be built leveraging the capabilities of a cryptocurrency. Communities can organize and make decisions with fully verifiable elections.

Through its tiny blockchain, Coda will make it possible to easily develop wide-reaching applications while being governed and validated by its users. We hope this makes a step towards allowing people to have more access and usability from a cryptocurrency.

# Our progress

Over the course of the past year, our team has made rapid progress towards making this vision a reality. A few milestones:

1. In March of 2018, we released Snarky, our in-house programming language designed to make zk-SNARK programming expressive, efficient, and fun(ctional). It has since allowed us to construct one of the most sophisticated snark circuits in the world, which serves as a core of the protocol. (If you’re interested in getting your feet wet with snark programming, join our upcoming workshops in London and SF!)
2. In July of 2018 we completed development of a decentralized snark-proving marketplace, which will allow anyone on the network to contribute by helping compress the blockchain.
3. In September of 2018 we launched Coda’s first testnet, running the world’s first succinct blockchain, and released a demo showing how Coda enables full-node-level security in the browser.
4. In October of 2018, we proudly open-sourced our protocol work. You can continue following our open-source progress on our Github page.
5. In March of 2019 we completed our implementation of a formally secure proof of stake system (Ouroboros Praos) running inside of a zero knowledge proof.

# How you can participate

Over the next few months, we’ll be inviting users to become more deeply involved with Coda. We believe that there is a direct correlation between the strength of a protocol and how much agency and leadership its community has to shape its direction.

With Coda, there will be numerous ways for both technical and non-technical builders to get involved. We’ll be working diligently to distribute many of the key roles of running and growing of Coda to these early contributors. We’re excited to take these next steps with you, so please sign up for any (or all) of the following if you’d be interested in joining in.

To build and expand on the technical foundations of Coda

To participate in running, securing and sharing the protocol

Finally - stay tuned for more posts on our plans to give governance of Coda over to the community, and what applications we will be building to showcase the capabilities of Coda.

Read more →

# A SNARKy Exponential Function

## Simulating real numbers using finite field arithmetic

### 2019-03-09

In Coda, we use the proof-of-stake protocol Ouroboros for consensus. Naturally since this is Coda, that means we have to check proofs-of-stake inside a SNARK.

A proof-of-stake for a person with some amount of stake $$a$$ in Ouroboros is a random number $$s$$ between 0 and 1 (which one provably has generated fairly) such that $$s$$ is less than some threshold depending on $$a$$. Concretely, that threshold is $$1 - (1/2)^{\frac{a}{T}}$$ where $$T$$ is the total amount of stake in the system.

It’s important to use a threshold of this form because it means that the density of blocks over time does not depend on the distribution of stake.
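That stake-independence property can be checked numerically. A small sketch in Python (the stake amounts and variable names are illustrative): winning with one account of stake $$a_1 + a_2$$ is exactly as likely as winning with at least one of two accounts of stake $$a_1$$ and $$a_2$$.

```python
# The threshold f(a) = 1 - (1/2)**(a/T) makes block density independent
# of how stake is split across accounts: splitting stake neither helps
# nor hurts. Numbers below are illustrative.

def threshold(stake, total):
    return 1 - 0.5 ** (stake / total)

T = 1000.0
a1, a2 = 300.0, 450.0

combined = threshold(a1 + a2, T)
# P(at least one of the two split accounts is below its threshold):
split = 1 - (1 - threshold(a1, T)) * (1 - threshold(a2, T))
print(abs(combined - split) < 1e-12)  # True: the win probabilities agree
```

Algebraically this works because $$(1/2)^{a_1/T} \cdot (1/2)^{a_2/T} = (1/2)^{(a_1+a_2)/T}$$.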

If you know anything about SNARKs, you know that inside of a SNARK all we can do is arithmetic (that is, addition and multiplication) in a finite field $$F_p$$. It’s not at all clear how we can compute a fractional number raised to a fractional power!

We’ll go through a very cool technique for computing such a thing. All the code for doing what we’ll talk about is implemented using snarky and can be found here.

The technique will go as follows:

1. We’ll use Taylor series to approximately reduce the problem to doing arithmetic with real numbers.
2. We’ll approximate the arithmetic of real numbers using the arithmetic of rational numbers.
3. We’ll then approximate the arithmetic of rational numbers using integer arithmetic.
4. Finally, we’ll simulate integer arithmetic using finite field arithmetic.

## Taylor series

First we need a way to reduce the problem of computing an exponentiation to multiplications and additions in a finite field. As a first step, calculus lets us reduce exponentiation to multiplication and addition over the real numbers (a field, but not a finite one) using a Taylor series.

Specifically, we know that

\begin{aligned}
1 - (1/2)^x
&= -\log(1/2) x - \frac{(\log(1/2) x)^2}{2!} - \frac{(\log(1/2) x)^3}{3!} - \dots \\
&= \log(2) x - \frac{(\log(2) x)^2}{2!} + \frac{(\log(2) x)^3}{3!} - \dots
\end{aligned}
We can truncate this Taylor series to get polynomials $$T_n$$
\begin{aligned}
T_n
&= \log(2) x - \frac{(\log(2) x)^2}{2!} + \dots + \frac{(\log(2) x)^n}{n!} \\
&= \log(2) x - \frac{\log(2)^2}{2!} x^2 + \dots + \frac{\log(2)^n}{n!} x^n
\end{aligned}

by taking the first $$n$$ terms. The Taylor polynomials nearly compute $$1 - (1/2)^x$$, but with some error that gets smaller as you take more and more terms. You can see this in the image of the actual function (in blue) along with the first few Taylor polynomials (in black).

It turns out there’s a handy formula which lets us figure out how many terms we need to take to make sure we get the first $$k$$ bits of the output correct, so we can just use that and truncate at the appropriate point for the amount of precision that we want.
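The truncation step can be sketched directly. A small Python example (the choice of $$x$$ and the cutoffs are illustrative) shows the error of $$T_n$$ shrinking as $$n$$ grows:

```python
# Truncated Taylor series T_n(x) for 1 - (1/2)**x:
#   T_n(x) = sum_{k=1..n} (-1)**(k+1) * (log(2) * x)**k / k!
import math

def taylor_approx(x, n):
    log2 = math.log(2)
    total = 0.0
    for k in range(1, n + 1):
        total += (-1) ** (k + 1) * (log2 * x) ** k / math.factorial(k)
    return total

x = 0.7
exact = 1 - 0.5 ** x
for n in (2, 4, 8):
    print(n, abs(taylor_approx(x, n) - exact))  # error shrinks as n grows
```

Because the terms shrink factorially, only a handful of terms are needed for each additional bit of precision.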

## From reals to rationals

Multiplication and addition are continuous, which means if you change the inputs by only a little bit, the outputs change by only a little bit.

Explicitly, if instead of computing $$x_1 + x_2$$, we compute $$a_1 + a_2$$ where $$a_i$$ is $$x_i$$ plus some small error $$e_i$$, we have $$a_1 + a_2 = (x_1 + e_1) + (x_2 + e_2) = (x_1 + x_2) + (e_1 + e_2)$$ so the result is close to $$x_1 + x_2$$ (since $$e_1 + e_2$$ is small).

Similarly for multiplication we have $$a_1 a_2 = (x_1 + e_1)(x_2 + e_2) = x_1 x_2 + e_1 x_2 + e_2 x_1 + e_1 e_2$$. If $$e_1, e_2$$ are small enough compared to $$x_1$$ and $$x_2$$, then $$e_1 x_2 + e_2 x_1 + e_1 e_2$$ will be small as well, and so $$a_1 a_2$$ will be close to $$x_1 x_2$$.

What this means is that instead of computing the Taylor polynomial $\log(2) x - \frac{\log(2)^2}{2!} x^2 + \dots + \frac{\log(2)^n}{n!} x^n$

using real numbers like $$\log(2)$$ (which is irrational), we can approximate each coefficient $$\log(2)^k / k!$$ with a nearby rational number and compute using those instead! By continuity, we’re guaranteed that the result will be close to the actual value (and we can quantify exactly how close if we want to).

## From rationals to integers

There are a few ways to approximate rational arithmetic using integer arithmetic, some of which are more efficient than others.

The first is to use arbitrary rational numbers and simulate rational arithmetic exactly. We know that $\frac{a}{b} + \frac{c}{d} = \frac{a d + b c}{b d}$ so if we represent rationals using pairs of integers, we can simulate rational arithmetic perfectly. However, there is an issue with this approach: the integers involved get really huge really quickly when you add numbers together. For example, computing $$1/2 + 1/3 + \dots + 1/n$$ with this rule (without reducing) produces $$n!$$ as the denominator, which is an enormous number.
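The blowup is easy to see by running the rule directly. A sketch in Python (the helper name is ours):

```python
# Exact rational addition with the rule a/b + c/d = (ad + bc)/(bd),
# with no reduction: the denominator of 1/2 + 1/3 + ... + 1/n is n!.
import math
from fractions import Fraction

def add_unreduced(r1, r2):
    (a, b), (c, d) = r1, r2
    return (a * d + b * c, b * d)

n = 10
total = (0, 1)
for k in range(2, n + 1):
    total = add_unreduced(total, (1, k))

print(total[1] == math.factorial(n))  # True: denominators just multiply up
```

Each addition multiplies the denominators, so after summing $$1/2$$ through $$1/n$$ the denominator is $$2 \cdot 3 \cdots n = n!$$.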

That’s a problem for us inside the SNARK, because we’re working with a finite field and want to make sure there is no overflow.

A better approach is to use rational numbers whose denominator is a power of two. These numbers are called dyadic rationals and are basically the same thing as floating point numbers.

Here, addition can be simulated using integers as follows. A rational number $$\frac{a}{2^k}$$ is represented as the pair $$(a, k)$$. Say $$k \leq m$$. For addition, we have $\frac{a}{2^k} + \frac{b}{2^m} = \frac{2^{m - k} a + b}{2^m}$ so the denominator of a sum is the max of the denominators of the inputs. That means the denominators don’t get huge when you add a bunch of numbers (the result’s denominator is just the largest denominator among the inputs).

Moreover, any rational number can be approximated by a number of this form (it’s just the binary expansion of that number).
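The dyadic representation and its addition rule fit in a few lines. A sketch in Python (the pair encoding `(a, k)` for $$a/2^k$$ follows the text; the function name is ours):

```python
# Dyadic rationals a / 2**k represented as the pair (a, k). Addition
# rescales the coarser input to the finer denominator, so the result's
# exponent is just the max of the inputs' exponents.

def dyadic_add(x, y):
    (a, k), (b, m) = x, y
    if k > m:
        (a, k), (b, m) = (b, m), (a, k)  # ensure k <= m
    # a/2**k + b/2**m = (2**(m-k) * a + b) / 2**m
    return (a * 2 ** (m - k) + b, m)

# 3/4 + 5/8 = 6/8 + 5/8 = 11/8
print(dyadic_add((3, 2), (5, 3)))  # (11, 3)
```

Unlike the arbitrary-rational scheme, repeated additions never grow the denominator past the largest one seen, which is what keeps the integers bounded.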

## From integers to a finite field

To recap, we’ve done the following.

1. We approximated our exponential $$1 - (1/2)^x$$ by arithmetic over the reals using Taylor polynomials.
2. We approximated real arithmetic by rational arithmetic using continuity.
3. We approximated rational arithmetic by the arithmetic of dyadic-rationals/floats, again using continuity. Moreover, we saw that floating point arithmetic can easily be simulated exactly using integer arithmetic.

Now our final step is to simulate integer arithmetic using the arithmetic of our prime order field $$\mathbb{F}_p$$. But this step is actually the easiest! As long as we are careful about numbers not overflowing, it’s the same thing. That is, if we know ahead of time that $$a + b < p$$, then we can safely compute $$a + b \mod p$$, knowing that the result will be the same as over the integers. The same is true for multiplication. So as long as we don’t do too many multiplications, the integers representing the dyadic rationals we’re computing with won’t get too big, and so we will be okay. And if we don’t take too many terms of the Taylor series, this will be the case.
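The "same as the integers, provided no overflow" condition can be made explicit. A sketch in Python, with a small illustrative prime (not Coda's actual field modulus):

```python
# Field arithmetic agrees with integer arithmetic as long as nothing
# overflows the modulus: if a + b < p, then (a + b) mod p == a + b,
# and likewise for multiplication. p here is illustrative only.

p = 2 ** 31 - 1  # a small Mersenne prime, NOT a real SNARK field modulus

def field_add(a, b):
    assert a + b < p, "overflow: result would no longer match the integers"
    return (a + b) % p

def field_mul(a, b):
    assert a * b < p, "overflow: result would no longer match the integers"
    return (a * b) % p

print(field_add(10 ** 6, 10 ** 6) == 2 * 10 ** 6)     # True
print(field_mul(40_000, 50_000) == 2 * 10 ** 9)       # True: 2e9 < 2**31 - 1
```

In the real circuit the bound is tracked statically rather than with runtime assertions, but the invariant is the same: keep every intermediate value below $$p$$.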

## Conclusion

Let’s survey the net result of all this approximation: We have a very efficient way of approximately computing an exponential function on fractional inputs inside of a SNARK, in such a way that we can have concrete bounds on the error of the approximation. Pretty cool! This enables us to use a threshold function for Ouroboros that guarantees a constant density of blocks regardless of the distribution of stake.

Read more →

# Fast Accumulation on Streams

## Succinctly Verifying Coda’s Ledger

### 2018-12-20

1 If you’d rather consume this content in video form, watch this talk.

While developing Coda, we came across an interesting problem that uncovered a much more general and potentially widely applicable problem: Taking advantage of parallelism when combining a large amount of data streaming in over time. We were able to come up with a solution that scales up to any throughput optimally while simultaneously minimizing latency and space usage. We’re sharing our results with the hope that others dealing with manipulation of online data streams will find them interesting and applicable.1

## Background

2 Equivalent to security as a full node.

The Coda cryptocurrency protocol is unique in that it uses a succinct blockchain. In Coda the blockchain is replaced by a tiny constant-sized cryptographic proof. This means that in the Coda protocol a user can sync with full-security2 instantly—users don’t have to wait to download thousands and thousands of blocks to verify the state of the network.

What is this tiny cryptographic proof? It’s called a zk-SNARK, or zero knowledge Succinct Non-interactive ARgument of Knowledge. zk-SNARKs let a program create a proof of a computation, then share that proof with anyone. Anyone with the proof can verify the computation very quickly, in just milliseconds, independent of how long the computation itself takes. While validating proofs is fast, creating them is quite slow, so creating this SNARK proof is much more computationally expensive. We use a few different SNARK proofs throughout Coda’s protocol, but the important one for this post is what we call the “Ledger Proof”.

3 Note that we represent account states concretely as their hashes for performance reasons.

A ledger proof tells us that given some starting account state $\sigma_0$ there was a series of $k$ transactions that eventually put us into account state $\sigma_k$. Let’s refer to such a proof as $\sigma_0 \Longrightarrow \sigma_k$.3 So what does it mean for a single transaction to be valid? A transaction, $T_i^{i+1}$, is valid if it’s been signed by the sender, and the sender had sufficient balance in their account. As a result our account state $\sigma_i$ transitions to some new state $\sigma_{i+1}$. This state transition can be represented as $\sigma_i T_{i}^{i+1} \sigma_{i+1}$. We could recompute $\sigma_0 \Longrightarrow \sigma_k$ every time there is a new transaction, but that would be slow, with the cost of generating the proof growing with the number of transactions—instead we can reuse the previous proof recursively. These ledger proofs enable users of Coda to be sure that the ledger has been computed correctly and play a part in consensus state verification.

More precisely, the recursive part of our ledger proof, $\sigma_0 \Longrightarrow \sigma_{i}$ (the claim that the account state has transitioned from the starting state $\sigma_0$ to the current state $\sigma_i$ after $i$ correct transactions), could naively be defined in the following way:

There exists a proof, $\sigma_0 \Longrightarrow \sigma_{i-1}$, and $\sigma_{i-1}T_{i-1}^{i}\sigma_i$ such that $\sigma_0 \Longrightarrow \sigma_{i-1}$ verifies and $\sigma_{i-1}T_{i-1}^{i}\sigma_i$ is valid.

Let’s examine what running this process over four steps would look like:

The functional programming enthusiast will notice that this operation is like a scan:

The ~init in OCaml refers to a named argument, and 'a and 'b are type unification variables that work similarly to generics in Java.

(* scan [1;2;3] ~init:0 ~f:(fun b a -> b + a)
   => [1;3;6] *)
val scan : 'a list -> init:'b -> f:('b -> 'a -> 'b) -> 'b list


A scan combines elements of a collection together incrementally and returns all intermediate values. For example if our elements are numbers and our operation is plus, scan [1;2;3] ~init:0 ~f:(fun b a → b + a) has following evaluation trace:

:: means “cons” or prepend to the front of a linked list.

scan [1;2;3] ~init:0 ~f:add
(0+1)::(scan [2;3] ~init:(0+1) ~f:add)
1::(scan [2;3] ~init:1 ~f:add)
1::(1+2)::(scan [3] ~init:(1+2) ~f:add)
1::3::(scan [3] ~init:3 ~f:add)
1::3::(3+3)::(scan [] ~init:(3+3) ~f:add)
1::3::6::(scan [] ~init:6 ~f:add)
1::3::6::[]
[1;3;6]


However, what we really have is a scan operation over some sort of stream of incoming information, not a list. A signature in OCaml may look like this:

val scan : 'a Stream.t -> init:'b -> f:('b -> 'a -> 'b) -> 'b Stream.t

6 We write streams as lists in the evaluation.

As new information flows into the stream we combine it with the last piece of computed information and emit that result onto a new stream. Here’s a trace with transactions and proofs6:

\begin{aligned}
scan &[\sigma_0T_0^{1}\sigma_{1}; \; \sigma_1T_1^{2}\sigma_{2}; \; \sigma_2T_2^{3}\sigma_{3}] \\
\sim &init:\sigma_0 \Longrightarrow \sigma_0 \\
\sim &f:combine \\
\\

combine&(\sigma_0 \Longrightarrow \sigma_0,\sigma_0T_0^{1}\sigma_1):: \\
(scan &[\sigma_1T_1^{2}\sigma_{2}; \; \sigma_2T_2^{3}\sigma_{3}] \\
\sim &init:combine(\sigma_0 \Longrightarrow \sigma_0,\sigma_0T_0^{1}\sigma_1) \\
\sim &f:combine) \\
\\

\sigma_0 \Longrightarrow \sigma_1&:: \\
(scan &[\sigma_1T_1^{2}\sigma_{2}; \; \sigma_2T_2^{3}\sigma_{3}] \\
\sim &init:\sigma_0 \Longrightarrow \sigma_1 \\
\sim &f:combine) \\
\\

\sigma_0 \Longrightarrow \sigma_2&::\sigma_0 \Longrightarrow \sigma_1:: \\
(scan &[\sigma_2T_2^{3}\sigma_3] \\
\sim &init:\sigma_0 \Longrightarrow \sigma_2 \\
\sim &f:combine) \\
\\

\sigma_0 \Longrightarrow \sigma_3&::\sigma_0 \Longrightarrow \sigma_2::\sigma_0 \Longrightarrow \sigma_1:: \\
(scan &[] \\
\sim &init:\sigma_0 \Longrightarrow \sigma_3 \\
\sim &f:combine) \\
\\

\sigma_0 \Longrightarrow \sigma_3&::\sigma_0 \Longrightarrow \sigma_2::\sigma_0 \Longrightarrow \sigma_1::[] \\
\\

[\sigma_0 \Longrightarrow \sigma_3&; \; \sigma_0 \Longrightarrow \sigma_2; \; \sigma_0 \Longrightarrow \sigma_1]
\end{aligned}

Unfortunately, we have a serial dependency in proof construction here: you must have $\sigma_0 \Longrightarrow \sigma_i$ before you can produce $\sigma_0 \Longrightarrow \sigma_{i+1}$. This is very slow. Using libsnark, it takes ~20 seconds to do one of these steps on an 8-core cloud instance, and that’s just for a single transaction. This translates to merely 3 transactions per minute globally on the network!

What we’ll do in this blog post is find a better scan. A scan that maximizes throughput, doesn’t incur too much latency, and doesn’t require too much intermediate state. A scan that takes advantage of properties of the zk-SNARK primitives we have. We’ll do this by iterating on our design until we get something that best meets our requirements. Finally, we’ll talk about a few other potential use cases for such a scan outside of cryptocurrency.

## Requirements

Now that we understand the root problem, let’s talk about requirements to help guide us toward the best solution for this problem. We want to optimize our scan for the following features:

1. Maximize transaction throughput

Transaction throughput here refers to the rate at which transactions can be processed and validated in the Coda protocol network. Coda strives to be able to support low transaction fees and more simultaneous users on the network, so this is our highest priority.

2. Minimize transaction latency

7 The more latency we sacrifice, the longer proposer nodes have to keep around full copies of the state before they can rely on the small SNARK alone.

It’s important to minimize the latency before a transaction enters our SNARK in order to keep RAM requirements low on proposer nodes, the nodes that propose new transitions during proof of stake.7 SNARKing a transaction is not the same as knowing a transaction has been processed, so this is certainly less important for us than throughput.

3. Minimize size of state

Again, to keep low RAM requirements on proposer nodes we want to minimize the amount of data we need to represent one state.

To restate the order of importance of these goals, from most to least important: maximize throughput, minimize latency, minimize size of state.

## Properties

We’ll start with some assumptions:

8 This is possible because of a cryptographic notion known as “Signature of Knowledge” which lets us embed information about the creator and a fee into the proof in a way that is unforgeable. We will talk more about how we use this information in another blog post.

• All SNARK proofs take one unit of time to complete
• Transactions arrive into the system at a constant rate $R$ per unit time
• We effectively have any number of cores we need to process transactions because we can economically incentivize actors to perform SNARK proofs and use transaction fees to pay those actors.8
• Two proofs can be recursively merged:

This merge operation is associative:

So we can actually write transaction SNARKs that effectively prove the following statements:

Base ($\sigma_i \Longrightarrow \sigma_{i+1}$)

There exists $\sigma_iT_i^{i+1}\sigma_{i+1}$ such that the transaction is valid

Merge ($\sigma_i \Longrightarrow \sigma_k$)

There exists $\sigma_i \Longrightarrow \sigma_j$ and $\sigma_j \Longrightarrow \sigma_k$ such that both proofs verify
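The Base and Merge statements can be sketched as a toy model, shown here in Python purely for illustration: a “proof” is just a (start, end) index pair, base covers one transaction, and merge glues adjacent proofs. None of this is actual SNARK code.

```python
# Toy model of the two transaction-SNARK statements:
#   Base:  sigma_i => sigma_{i+1}   (one valid transaction)
#   Merge: sigma_i => sigma_k       (from sigma_i=>sigma_j and sigma_j=>sigma_k)
# "Proofs" here are (start, end) pairs; real proofs carry cryptographic content.

def base(i):
    return (i, i + 1)

def merge(p, q):
    assert p[1] == q[0], "proofs must be adjacent to merge"
    return (p[0], q[1])

# Associativity: merging left-first or right-first yields the same proof,
# which is what lets us build a balanced tree of merges in parallel.
a, b, c = base(0), base(1), base(2)
print(merge(merge(a, b), c) == merge(a, merge(b, c)))  # True: both are (0, 3)
```

Associativity is the key property the rest of the post exploits: it means the order in which merges happen doesn’t matter, only adjacency.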

Before we go any further, though, let’s abstract away some details here.

## Abstractions

Data:

$D_i := \; \; \sigma_iT_i^{i+1}\sigma_{i+1}$

Base work:

$B_i := \; \; \sigma_i \Longrightarrow \sigma_{i+1}$

Merge work:

$M_{ij} := \; \; \sigma_i \Longrightarrow \sigma_j$

Accumulated value:

$A_k := \; \; \sigma_0 \Longrightarrow \sigma_k$

Let’s say that data effectively enqueues a “Base job” that can be completed to become “Base work”. Similarly, two “Base work”s (or two “Merge work”s) can be combined in a “Merge job” to create “Merge work”.

## Initial Analysis

### Upper Bound

Let’s set an upper bound efficiency target for any sort of scan. No matter what we do we can’t do better than the following:

• Throughput: $R$ per unit time

We said new data was entering the system at a rate of $R$ per unit time, so the best we can do is complete the work as soon as it’s added.

• Latency: $O(1)$

In the best case, we don’t have to wait to get the piece of data included as part of the scan result. Whatever time it takes to do one step is the time it takes before our data is included in the scan result.

• Space: $O(1)$

We don’t need to store any extra information besides the most recent result.

As a reminder, we decided that the naive approach is just a standard linear scan. This “dumb scan” gives a nice lower bound on throughput, and we can also analyze the other attributes we care about here:

### Linear Scan

• Throughput: $1$ per unit time

Our linear scan operation emits a result at every step and so we need the prior result before we can perform the next step.

• Latency: $O(1)$

Every step emits a single result based on the data

• Space: $O(1)$

We only have to hold on to the most recently accumulated result to combine with the next value.

Since our primary goal is to maximize throughput, it’s clear a linear scan isn’t appropriate.

## Parallel Periodic Scan

Recall that the merge operation is associative. This means that we can choose to evaluate more than one merge at the same time, thus giving us parallelism! Even though data are coming in only $R$ at a time, we can choose to hold more back to unlock parallel merge work later. Because we effectively have infinite cores, we can get a massive speedup by doing work in parallel.
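To see why associativity buys us parallelism, here is a small sketch in Python (illustrative only; Coda's code is OCaml): combining level by level gives the same answer as a left-to-right fold, and every merge within a level is independent, so each one could run on its own core.

```python
from functools import reduce

def tree_reduce(merge, xs):
    """Combine adjacent pairs level by level; within a level, every
    merge is independent and could run in parallel on its own core."""
    while len(xs) > 1:
        paired = [merge(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:  # an odd element passes through to the next level
            paired.append(xs[-1])
        xs = paired
    return xs[0]

data = [1, 2, 3, 4, 5, 6, 7, 8]
add = lambda a, b: a + b
# Associativity guarantees the tree grouping matches a sequential fold,
# but the tree finishes in log2(8) = 3 parallel levels instead of 7 steps.
assert tree_reduce(add, data) == reduce(add, data) == 36
```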

This gives rise to the notion of a “periodic scan”:

```ocaml
(* periodicScan 1->2->3->4->5->6->7->8
     ~init:0
     ~lift:(fun a -> a)
     ~merge:(fun a b -> a + b)
   => 10->36 *)
val periodicScan :
     'a Stream.t
  -> init:'b
  -> lift:('a -> 'b)
  -> merge:('b -> 'b -> 'b)
  -> 'b Stream.t
```

A scan that periodically emits complete values: not every time an 'a datum appears on the stream, but perhaps every few times. This therefore has slightly different semantics than a traditional scan operation.

Rather than returning a stream emitting 1→3→6→10→15→21→28→36, we buffer data elements 1 through 4, compute with those in parallel, and only emit the resulting sum, 10, when we’re done. Likewise we buffer 5 through 8, combine their sum with 10, and emit 36 when we’re done. We periodically emit intermediate results instead of doing so at every element.
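These semantics can be sketched in Python (a toy model, not the Coda implementation; the parameter names mirror the OCaml signature above):

```python
from functools import reduce

def periodic_scan(stream, init, lift, merge, chunk_size):
    """Buffer chunk_size elements, combine the chunk with the associative
    merge (the part that parallelizes), fold it into the accumulator,
    and emit only these periodic results."""
    acc = init
    buffer = []
    for x in stream:
        buffer.append(lift(x))
        if len(buffer) == chunk_size:
            acc = merge(acc, reduce(merge, buffer))
            buffer = []
            yield acc

# periodicScan over 1->2->...->8 with ~init:0 and (+) emits 10 -> 36:
emitted = list(periodic_scan(range(1, 9), 0, lambda a: a, lambda a, b: a + b, 4))
assert emitted == [10, 36]
```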

## Naive Implementation of Periodic Scan

Let’s go over this tree construction step-by-step, considering what happens to our data over time as it’s coming through into the system. Let’s consider $R = 8$.

First we gather $R$ pieces of data and enqueue $R$ Base jobs for our network to complete. We use $R$ of our cores and can complete all jobs in one time step. Incoming data is held back on the pipe; we are forced to buffer it because we haven’t finished handling the first $R$.

As we complete Base work, we make way for a series of Merge jobs that can be completed in the next step:

Now we have $\frac{R}{2}$ pieces of merge work to complete and we use $\frac{R}{2}$ cores and complete them in one time step.

We repeat until we reach the top of the tree. The completed Merge work at the top can be consumed by the rest of the system.

## Analysis

• Throughput: $\frac{R}{log(R)}$

Every $log(R)$ steps, we have the opportunity to consume $R$ more pieces of data.

• Latency: $O(log(R))$

It takes $log(R)$ time steps before we emit our top-level merge work, as we halve the number of nodes in each layer of our tree at each step.

• Space: $O(R)$

We now have to keep parts of a tree around at each step. Our trees have $R$ leaves, and a typical binary tree with $R$ leaves has $2R-1$ nodes when completed; with the extra base layer we actually use $3R-1$ nodes.

### Naive Periodic Scan

For the purposes of visualization, unit time is being replaced with 60 seconds. We assume the space of a single node in the tree is 2KB.

|  | Throughput (in data per second) | Latency (in seconds) | Space |
| --- | --- | --- | --- |
| $R=4$ | 0.0333 | 180 | ~22KB |
| $R=16$ | 0.0667 | 300 | ~94KB |
| $R=1024$ | 1.71 | 660 | ~6MB |
| $R=16384$ | 19.5 | 900 | ~98MB |
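The table's entries follow directly from the analysis under the stated assumptions (60-second steps, 2KB nodes); a quick sanity check in Python:

```python
import math

STEP_SECONDS = 60      # one unit of time, as assumed for the table
NODE_BYTES = 2 * 1024  # assumed size of one tree node (2KB)

def naive_metrics(R):
    depth = int(math.log2(R))
    throughput = R / (depth * STEP_SECONDS)  # R data every log2(R) steps
    latency = (depth + 1) * STEP_SECONDS     # log2(R)+1 layers up to the root
    space = (3 * R - 1) * NODE_BYTES         # binary tree plus extra base layer
    return throughput, latency, space

for R in (4, 16, 1024, 16384):
    throughput, latency, space = naive_metrics(R)
    print(f"R={R}: {throughput:.4f} data/s, {latency}s, ~{space // 1024}KB")
```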

### Serial Scan

|  | Throughput (in data per second) | Latency (in seconds) | Space |
| --- | --- | --- | --- |
| $R=4$ | 0.05 | 20 | ~2KB |
| $R=16$ | 0.05 | 20 | ~2KB |
| $R=1024$ | 0.05 | 20 | ~2KB |
| $R=16384$ | 0.05 | 20 | ~2KB |

We have increased throughput at the cost of some latency and space when compared with the serial approach, so this is a little bit better!

However, this solution leaves something to be desired—why must we halve our parallelism as we walk up each layer of the tree? We have a stream feeding us $R$ data values every unit of time, so we should have enough work to do. Shouldn’t we use this somehow?

## Better Solution

Let’s take advantage of the fact that we get $R$ new data values each time we complete work—still preferring earlier queued data values to minimize latency once we’ve exhausted available parallelism.

With this in mind, let’s trace a run-through, this time always making sure we have $R$ pieces of work to do at every step—for illustration, let’s pick $R=2$:

We do as we did before, but this time we have $R$ jobs to complete and can dispatch to our $R$ cores every step. We have exactly $log(R)$ trees pending at a time. At every step, we complete the first tree (tree zero) and at tree $i$, we complete layer $i$.

## Analysis

• Throughput: $R$

Throughput of work completion matches our stream of data: we’ve hit our upper bound.

• Latency: $O(log(R))$


Latency is still logarithmic, though now it’s $log(R)+1$ steps, as our trees have $R$ leaves plus an extra layer on the bottom for Base jobs. In fact, this is actually the lower bound.9

• Space: $O(R*log(R))$

We have multiple trees now. Interestingly, we have exactly $log(R)$ trees pending at a time. Again, our trees take up an extra layer compared with traditional binary trees, so each has $3R-1$ nodes since we have $R$ leaves, and we have $log(R)$ of these trees.10

Now that we have thoroughly optimized our throughput and latency, let’s optimize for space.

## Optimize size


Do we really need to hold all $log(R)$ trees? We only ever care about the frontier of work: all the information we need to perform the next layer of jobs. We clearly don’t need to store anything above or below that frontier in the trees.

Notice that we only use part of each layer, even across the $log(R)$ trees. So we can represent the frontier of all $log(R)$ trees with a single tree, with work pipelining from the leaves to the root in the following manner:

## Analysis

• Throughput: $R$

Throughput is the same as before.

• Latency: $O(log(R))$

Latency is the same as above.

• Space: $O(R)$

We’ve reduced our space back down to a single tree of $3R-1$ nodes.

## Space Optimization

Do we really need that extra layer? If we change how we think about the problem, we can use a perfect binary tree which we can manipulate to save even more space:

Now we’re down to $2R-1$ nodes—a standard binary tree with $R$ leaves.

How do we store the tree? Since we know the size a priori (a complete binary tree with $R$ leaves), we can use a succinct representation.
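As a sketch of the idea (illustrative Python; the real structure lives in the OCaml codebase), a complete binary tree with $R$ leaves can live in a flat array of $2R-1$ slots with heap-style index arithmetic, so the only bookkeeping beyond the nodes themselves is a scalar cursor:

```python
# Array-backed complete binary tree: no pointers, navigation is pure
# index arithmetic. Illustrative only; assumes R is a power of two.
class ImplicitTree:
    def __init__(self, num_leaves):
        # 2R-1 fixed slots; the only extra state is a scalar cursor,
        # which is what makes the representation "implicit" (O(1) extra bits).
        self.nodes = [None] * (2 * num_leaves - 1)
        self.cursor = 0

    @staticmethod
    def parent(i):
        return (i - 1) // 2

    @staticmethod
    def children(i):
        return 2 * i + 1, 2 * i + 2

tree = ImplicitTree(8)
assert len(tree.nodes) == 15               # 2R - 1 nodes for R = 8
assert ImplicitTree.parent(9) == 4         # no pointers needed to walk up...
assert ImplicitTree.children(4) == (9, 10) # ...or down
```

This is the same layout as the familiar implicit binary heap.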


A succinct data structure requires only $o(Z)$ extra space to manage the relationship between the elements if $Z$ is the optimal number of bits that we need to express the information in an unstructured manner. Note that this is little-$o$ not big-$O$—a much tighter bound.11

In fact our structure as described is actually an implicit one because of our scalar cursor. An implicit data structure is one that uses only $O(1)$ extra bits.12 In later refinements (in part 2), we’ll go back to a succinct representation because we need to relax one of the assumptions we made here. This is similar to the popular implicit heap that you may have learned about in a computer science class.

## Final Analysis

• Throughput: $R$

Throughput keeps up with production rate $R$, so we couldn’t do better.

• Latency: $O(log(R))$

Latency is proportional to $log(R)$ steps, as we described earlier, so we don’t get hurt too badly there.

• Space: $2R-1 + O(1)$

We have an implicit data structure representation for our complete binary tree with $R$ leaves ($2R-1$ nodes), as described above.

### Fully Optimized Scan

|  | Throughput (in data per second) | Latency (in seconds) | Space |
| --- | --- | --- | --- |
| $R=4$ | 0.0667 | 180 | ~22KB |
| $R=16$ | 0.267 | 300 | ~94KB |
| $R=1024$ | 17.1 | 660 | ~6MB |
| $R=16384$ | 273 | 900 | ~98MB |
| $R=65536$ | 1092 | 1020 | ~393MB |

We went from a sequential solution that at $R=16384$ only handled a throughput of 0.05 data per second to an initial parallel solution that handled 19.5 data per second to a fully optimized solution that handles 273 data per second. Our final solution even has optimal latency and space characteristics.

We did it! Coda can now be limited in its throughput by the speed at which information can flow across the network, and no longer by the time it takes to construct a SNARK. Moreover, we solved a more general problem: Efficiently computing an online periodic parallel scan over an infinite stream for some associative operation.

## Other Use Cases

Deep space telescopes produce an astronomical amount of data per second. For example, the Square Kilometre Array will process petabits of data per second. If data frames are coming in faster than we can process them, which is certainly true for some workloads like non-parametric machine learning, we can use this data structure to handle those streams.

More generally, certain map-reduce type workloads that act in an online fashion (on an infinite stream of inputs instead of a finite collection) with expensive operators, could benefit from using our same data structure.

You can also search the literature for prior art. We didn’t find much in the map-reduce papers; the only somewhat related work is a paper from the GPU programming world, but it doesn’t address the infinite-streaming aspect. Please leave a comment if you want to share any related work.

## Conclusion

We were able to take advantage of parallelism and other properties of our system to solve this general “periodic scan” problem of combining data streaming in an online fashion. As described above, the result doesn’t limit throughput at all, has optimal latency characteristics, and is succinct. With this data structure, Coda is free to take advantage of succinctness to offer high throughput with no risk of centralization!

In a future blog post, we’ll talk about instantiating this parametric structure with concrete parameters and how we instantiate our infinite core machine model by farming work out to the network. We’ll also talk about the optimization problem we have for choosing how to fill out these trees with completed work.

If you like this sort of stuff, we’re looking for open source contributors and hiring.

## Future work

We’ll explore modifying this structure to optimize latency in the presence of variable throughput. You can imagine that if we detect input data throughput becomes sufficiently slow we can remove a layer from the next virtual tree, and if it’s too fast we can add one. We haven’t yet explored how this will affect the further refinements we made on top of the virtual trees.

Additionally, we will want to explore a more efficient mechanism to share account states that are part of the scan tree to nodes that don’t care about the in-progress proofs, so that bandwidth-constrained nodes can still look up their most recent account states without waiting for a ledger proof to pop out of the tree.

## Appendix

We can reify this model with the following signature in the Coda codebase:

'a is the type of the top value and there’s some notion of an associative merging operation on the 'a values. 'd is the type of the data at the leaves that comes in at rate $R$.

```ocaml
val start : parallelism_log_2:int -> ('a, 'd) State.t
(** The initial state of the parallel scan at some parallelism *)

val next_jobs : state:('a, 'd) State.t -> ('a, 'd) Available_job.t list
(** Get all the available jobs *)

val enqueue_data : state:('a, 'd) State.t -> data:'d list -> unit Or_error.t
(** Add data to the parallel scan state *)

val free_space : state:('a, 'd) State.t -> int
(** Compute how many data ['d] elements we are allowed to add to the state *)

val fill_in_completed_jobs :
     state:('a, 'd) State.t
  -> completed_jobs:'a State.Completed_job.t list
  -> 'a option Or_error.t
(** Complete jobs needed at this state -- optionally emits the ['a] at the top
    of the tree *)
```
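To make the call pattern concrete, here is a toy Python analogue of this signature. It is hypothetical and greatly simplified: no `Or_error`, no real tree layout, and no pipelining across steps; it only shows how the enqueue/complete cycle is meant to interact.

```python
class ToyScanState:
    """Hypothetical, much-simplified analogue of the OCaml State.t above."""

    def __init__(self, parallelism_log_2):
        self.leaves = 2 ** parallelism_log_2
        self.jobs = []  # available jobs: ('Base', datum) or ('Merge', a, b)

    def free_space(self):
        return self.leaves - len(self.jobs)

    def enqueue_data(self, data):
        if len(data) > self.free_space():
            raise ValueError("not enough free space")
        self.jobs.extend(('Base', d) for d in data)

    def next_jobs(self):
        return list(self.jobs)

    def fill_in_completed_jobs(self, results):
        # Pair completed results into Merge jobs; when a single result
        # covers everything, emit it as the value at the top of the tree.
        self.jobs = [('Merge', a, b) for a, b in zip(results[::2], results[1::2])]
        return results[0] if len(results) == 1 else None

state = ToyScanState(parallelism_log_2=2)
state.enqueue_data([1, 2, 3, 4])
emitted = None
while emitted is None:
    # "Complete" each job: a Base job lifts its datum, a Merge job adds.
    done = [j[1] if j[0] == 'Base' else j[1] + j[2] for j in state.next_jobs()]
    emitted = state.fill_in_completed_jobs(done)
assert emitted == 10  # 1 + 2 + 3 + 4 pops out of the top of the tree
```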


## Acknowledgements

Thanks to Evan Shapiro for working through these data structures with me when we were first figuring this stuff out. Thanks to Deepthi Kumar for collaborating with me on several optimizations. Thanks to Laxman Dhulipala for taking a video call early on and helping with an early core insight about the multiple trees. Finally, thanks to Omer Zach and Corey Richardson (and Evan and Deepthi) for their very thorough feedback on drafts of this post!

1. If you’d rather consume this content in video form, watch this talk.

2. Equivalent to security as a full node.

3. Note that we represent account states concretely as their hashes for performance reasons.

4. The ~init in OCaml refers to a named argument, and 'a and 'b are a type unification variables that work similarly to generics in Java.

5. :: means “cons” or prepend to the front of a linked list.

6. We write streams as lists in the evaluation.

7. The more we sacrifice latency the longer proposer nodes have to keep around full copies of the state before just relying on the small SNARK itself.

8. This is possible because of a cryptographic notion known as “Signature of Knowledge” which lets us embed information about the creator and a fee into the proof in a way that is unforgeable. We will talk more about how we use this information in another blog post.

9. Here’s a short informal proof: Note that any sort of reduction operation on $N$ pieces of data can’t be done faster than $O(log(N))$ span. If we assume we could handle our $R$ units that we enqueue at a time in fewer than $O(log(N))$ steps then since we’re doing a reduction operation we would be doing it faster than $O(log(N))$ which is a contradiction.

10. In order to prevent latency and space from growing over time, we need to make sure we complete work as fast as we add it.

11. This is a very interesting area of computer science research, and I very much recommend the curious to read more: See Zhou, et. al 2013 and wavelet trees.

12. In our case, just the cursor.



# Functional programming + crypto reading list

### 2018-04-01

Recently a candidate sent us a very thoughtful email asking what kind of knowledge we’re looking for in a candidate. It set off a discussion at the office, as we wondered what direction we could provide candidates who might not yet have all of the perfect qualifications.

Below is our response to the candidate, giving guidance on what kinds of knowledge we are looking for in candidates.

None of us came into this field knowing everything. In fact, we came from quite varied backgrounds, and we all have areas we’re more or less comfortable in. So even though I’ll be giving a more comprehensive list below, it’s much more important to be excited about learning new things than having any particular expertise, and we’d be interested in talking to any candidate that knows some things in the list below and is capable and energized to tackle more.

So, after some deliberation, here are some basic resources to get you started. For our company, knowing the basics in cryptography (and cryptocurrency), functional programming, and math is probably the most important.

## Functional Programming

We’re building our protocol in OCaml, but you can learn the functional style in any functional programming language: Haskell, Scala, SML, and Elm are all great. You should be familiar with common techniques such as:

• modeling with types
• reading type signatures and recognizing monads
• algebraic data types
• programming in an immutable style
• higher-order functions
• pattern matching
• recursion

If you write a medium-sized project you’ll learn a lot about these concepts, and probably become a more well-rounded programmer.

Some books we like are

## Math

For our purposes, some basic knowledge of abstract algebra is useful. Good things to understand are the definitions of groups and fields. Basic (discrete) probability theory is useful as well.

Programming for SNARKs can also be quite mathematical, and we think the skills provided by a background in mathematics are somewhat transferable to SNARK programming.

## Cryptography

Oded Goldreich’s Foundations of Cryptography is a classic, and it’ll give you a good grounding in fundamental concepts, including:

• hash functions
• signature schemes
• zero-knowledge proofs

For cryptocurrencies in particular, another great book is Bitcoin and Cryptocurrency Technologies, which is one of the best resources we’ve found on cryptocurrencies.

## Special topics

We also are working on some projects that could really use special expertise in a few areas. We of course don’t expect anyone to be an expert in all of these areas, but experience in one or two would certainly set a candidate apart.

• Programming Languages: Type systems, compilers, etc.
• Advanced Cryptography: Elliptic curve cryptography, zk-SNARKs/STARKs, formal treatment of cryptocurrency
• Security
• Systems Programming
• Distributed Systems

Again, we stress that a candidate need not have background in any of the above specialized areas, but these are the fields that will be useful to us as we build out our protocol, so if this stuff seems interesting to you, and you’d be willing to start learning a chunk of it, we’d be interested in talking. Click here to view job listings and apply.


# Snarky: A high-level language for verifiable computation

### 2018-03-11

Living in the world today requires giving up a lot of control. We give up control of our financial lives to banks and unaccountable credit bureaus. We give up control of our most intimate data to use online services like Facebook, Amazon, and Google. We give up control of our elections to voting-system companies who run opaque and unauditable elections. We even give up some control over our understanding of the world through exposure to false or misleading news stories.

But it doesn’t have to be this way. Cryptography as a discipline provides us with some of the tools necessary to regain some of this control over our resources and data while reducing the need to trust unaccountable actors.

One such tool is verifiable computation. A verifiable computation is a computation that produces an output along with a proof certifying something about that output. Until very recently, verifiable computation has been mostly theoretical, but recent developments in zk-SNARK constructions have helped make it practical.

Verifiable computation makes it possible for you to be confident about exactly what other people are doing with your data. For example, it enables

• Online services that are forced to be transparent about what data of yours they have and how they are using it.
• Auditable elections that protect the privacy of your vote.
• Publishing stories that provably come from a reliable source, without leaking what that source is.
• Sending money to others without yielding control of your account or your privacy.

Verifiable computation is powered by zk-SNARKs. Right now, however, programming directly with SNARKs is comparable to writing machine code by hand, and trusting “SNARK machine code” is a lot like trusting a compiled binary without the source code.

To fulfill the promise of verifiable computing as a tool for returning control and agency to individuals, their operation has to be made as transparent as possible. We can help accomplish that goal by making the properties that verifiable computations prove specifiable in languages that are as close as possible to the informal, intuitive properties we have in our minds. That way, individuals can trust the easy-to-understand high-level specifications, rather than opaque “SNARK machine code”.

In this post, I’ll describe how we at O(1) Labs are helping to bridge this gap and solve the transparency problem with our language Snarky for specifying verifiable computation.

# Verifiable computation

As mentioned above, a verifiable computation is a computation that computes an output along with a proof certifying something about that output.

For example,

• A government could compute the winner of an election and prove that they counted all the votes correctly.
• An advertising service could compute an ad to serve to you and prove that the ad was generated without using personal data (i.e., your race, income, political views, etc.)
• A website could compute a news-feed to send to you and prove that it was generated without access to your personal data (and thus free of targeted ads, content, etc.)
• A journalist could compute a story containing a quote from a source and prove that the quote came from a reliable source (without revealing which one).

# Verifiable elections

For this post, let’s focus on the example of a verifiable election. One place where people are clamoring for accountability is of course in how pizza toppings are chosen in group settings. (Heads up: I kept this example simple for exposition, which means there are a few flaws. I take no accountability for your pizza election.)

So let’s imagine you and your friends are trying to decide on what pizza topping to get (either pepperoni or mushroom) and you’d like to vote on a topping while keeping your votes as secret as possible.

Here is a picture of pizza to keep you interested.

Let’s say everyone trusts Alice to keep votes secret. She’ll act as the “government” by collecting everyone’s votes. But everyone also knows Alice loves mushroom pizza, which means we don’t necessarily trust her to run a fair election. So we’ll develop a scheme that gives everyone assurance that the election was run correctly (i.e., that each person’s vote was included and that the majority vote was computed correctly).

Using zk-SNARKs, we can write a verifiable computation which Alice can run to compute the majority vote and prove that it was computed correctly. Moreover, using the “zk” or zero-knowledge part of zk-SNARKs, she can do so in such a way that everyone can trust the result without learning any information about individuals’ votes.

# zk-SNARKs, technically

Simplifying a bit, zk-SNARK constructions give us the following ability. Say we have a set of polynomials $$p_1, \dots, p_k$$ in variables $$x_1, \dots, x_n, y_1, \dots, y_m$$. For given $$\alpha_1, \dots, \alpha_n$$, if we know $$\beta_1, \dots, \beta_m$$ such that $$p_i(\alpha_1, \dots, \alpha_n, \beta_1, \dots, \beta_m) = 0$$ for each $$i$$, we can produce a piece of data $$\pi$$ certifying our knowledge of such $$\beta_i$$s, with the following two properties:

1. Zero-knowledge: $$\pi$$ does not expose any information about $$\beta_1, \dots, \beta_m$$.
2. Succinctness: $$\pi$$ is very small (concretely, a few hundred bytes) and can be checked quickly (concretely, in milliseconds).

Such a set of $$\beta$$s is called a satisfying assignment.

It turns out that such constraint systems are universal in the following sense. Given any (non-deterministic) verifiable computation, we can construct a constraint system so that the existence of a satisfying assignment is equivalent to the correctness of the computation.

So it seems that zk-SNARKs give us exactly what we want: a means to prove the correctness of computations while hiding private information and saving parties from having to redo the computation themselves.
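Stripped of all cryptography, the satisfying-assignment notion is easy to state in code. A minimal Python sketch (the polynomials below are made-up examples): given public $$\alpha$$s, a prover knows $$\beta$$s zeroing every polynomial; the SNARK's job is to certify this without revealing the $$\beta$$s or making the verifier re-run the check.

```python
# Check that a claimed witness beta zeroes every polynomial for public
# inputs alpha -- the bare "satisfying assignment" condition, with none
# of the zero-knowledge or succinctness a SNARK adds on top.
def satisfies(polys, alpha, beta):
    return all(p(alpha, beta) == 0 for p in polys)

# Example (hypothetical) system: beta[0] is a bit, and alpha[0] = 2 * beta[0].
polys = [
    lambda a, b: b[0] * (b[0] - 1),  # booleanity: b0 must be 0 or 1
    lambda a, b: a[0] - 2 * b[0],    # linear relation to the public input
]
assert satisfies(polys, alpha=[2], beta=[1])      # a valid witness
assert not satisfies(polys, alpha=[2], beta=[0])  # an invalid one
```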

# Back to elections

With these SNARKs in hand, let’s return to our election example. The voting scheme will be as follows:

1. Each voter $$i$$ chooses a vote $$v_i$$ and a nonce $$b_i$$. They publish a commitment $$h_i = H(b_i, v_i)$$, where $$H$$ is some collision resistant hash function. They send $$(b_i, v_i)$$ to the government.
2. The government computes the majority vote $$w$$ and publishes it along with a SNARK proving “For each $$i$$, I know $$(b_i, v_i)$$ such that $$H(b_i, v_i) = h_i$$ and $$w$$ is the majority vote of the $$v_i$$”.
3. Voters verify the SNARK on their own against the public set of commitments $$(h_1, \dots, h_n)$$.
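Steps 1 and 3 can be sketched without the SNARK (illustrative Python; note that verification here reopens every commitment and therefore reveals the votes, which is exactly what the SNARK in step 2 exists to avoid):

```python
import hashlib
import secrets

def commit(nonce, vote):
    """H(b_i, v_i): a hash commitment to a vote under a random nonce."""
    return hashlib.sha256(f"{nonce}:{vote}".encode()).hexdigest()

# Step 1: each voter picks a vote and a nonce, and publishes a commitment.
ballots = [(secrets.token_hex(16), v)
           for v in ("pepperoni", "mushroom", "pepperoni")]
commitments = [commit(n, v) for n, v in ballots]

# Step 2 (sans SNARK): the government, which holds the openings,
# computes the majority vote w.
votes = [v for _, v in ballots]
w = max(set(votes), key=votes.count)
assert w == "pepperoni"

# Step 3 stand-in: this check reopens every commitment, leaking the votes;
# the SNARK proof lets voters verify the same claim with no openings.
assert all(commit(n, v) == h for (n, v), h in zip(ballots, commitments))
```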

The zero knowledge property of the SNARK ensures that no votes are revealed to anyone except the government. So, to realize this scheme in practice, all we need to do is to encode the above statement as a constraint system. Here it is:

*(a partial listing of the generated constraint system)*

Great, we’re done! Er – well, maybe not. The trouble is that it’s basically impossible for anyone to verify that this constraint system does actually enforce the above property. I could have just chosen it at random, or maliciously. In fact it doesn’t actually force the property: I deleted a bunch of constraints to make this page load faster.

The situation is similar to programming in general: one doesn’t want to have to trust a compiled binary, because it is difficult to verify that it is doing what one expects it to do. Instead, we write programs in high-level languages that are easier for people to verify, and then compile them to assembly.

Here, in order for it to be convincing that a constraint system actually does what one expects it to do, one would like it to be the result of running a trusted compiler on a high-level program that is more easily seen to be equivalent to the claim one wants to prove.

# Toward a programming language for verifiable computation

We’ll now describe Snarky, our OCaml DSL for verifiable computation. It’s a high-level language for describing verifiable computations so that their correctness is more transparent. First we describe the programming model of Snarky and then explain in more depth how this model is realized.

## Requests

The basic programming model is as follows. A “verifiable computation” will be an otherwise pure computation augmented with the ability to do the following two things:

1. Pause execution to ask its environment to provide it with a value and then resume execution using that value.
2. Assert that a constraint holds among some values, terminating with an exception if the constraint does not hold.
*A verifiable computation requesting a value from its environment*

To get a feel for the model, let’s see our election computation rendered in a pseudocode version of Snarky.

```
winner (commitments):
  votes =
    List.map commitments (fun commitment ->
      (nonce, vote) = request (Open_ballot commitment)
      assert (H(nonce, vote) = commitment)
      return vote)

  pepperoni_count =
    count votes (fun v -> v = Pepperoni)

  pepperoni_wins = pepperoni_count > commitments.length / 2
  return (if pepperoni_wins then Pepperoni else Mushroom)
```

This is intended to define a function winner that takes as input a list of commitments and returns the majority vote of a set of votes corresponding to those commitments (assuming it doesn’t terminate with an exception). It obtains the corresponding votes by mapping over the commitments and for each one

• requesting that the environment provide it with such a vote (and nonce)
• asserting that the provided vote and nonce do in fact correspond to the commitment.

If winner(commitments) is run in an environment in which it terminates without an assertion failure and outputs w, we know that there were votes corresponding to commitments such that the majority vote was w. Snarky gives us a way to prove statements like this about computations.

$$\newcommand{\tild}{\widetilde}$$ Namely, given a verifiable computation $$P$$ (i.e., a computation that makes some requests for values and assertions of constraints) Snarky lets us compile $$P$$ into a constraint system $$\tild{P}$$ such that the following two are equivalent:

1. Some environment can provide $$P$$ with values to answer each request such that $$P$$ executes without an assertion failure.
2. Some environment can produce a satisfying assignment to $$\tild{P}$$.

In our case, the requests are for openings to each of the vote commitments, and the assertions check the correctness of the openings. So, reiterating, if Alice can prove winner(cs) = w for some commitments cs and winner w, she will have proved “I know a set of votes votes corresponding to the commitments cs such that the majority vote of votes is w”.

## Snarky concretely

Let’s take a look at what the above example actually looks like in Snarky:

```ocaml
let winner commitments =
  let%bind votes =
    Checked.List.mapi commitments ~f:(fun i commitment ->
      let%bind nonce, vote =
        request Ballot.Opened.typ (Open_ballot i)
      in
      let%map () =
        hash_ballot (nonce, vote)
        >>= Ballot.Closed.assert_equal commitment
      in
      vote)
  in
  let%bind pepperoni_count =
    count votes ~f:(fun v -> Vote.(v = var Pepperoni))
  in
  let half = constant (Field.of_int (List.length commitments / 2)) in
  let%bind pepperoni_wins = pepperoni_count > half in
  Vote.(if_ pepperoni_wins ~then_:(var Pepperoni) ~else_:(var Mushroom))
```

There’s a bit of noise caused by the harsh realities of OCaml’s monad syntax, but overall it is quite close to our pseudocode. We

1. Map over the commitments, requesting for our environment to open them.
2. Compute the number of votes for pepperoni.
3. If the number of pepperoni votes is greater than half the votes, return pepperoni as the winner, and otherwise return mushroom.

## Handling requests

We must provide a mechanism for handling requests made by verifiable computations to pass in requested values (similar to the way we write exception handlers). In Snarky, this looks like

```ocaml
handle
  (winner commitments)
  (fun (With {request; respond}) ->
    match request with
    | Open_ballot i -> respond ballots.(i)
    | _ -> unhandled)
```

where ballots : Ballot.Opened.t array is the array of opened ballots that the government has access to.

The request/handler model has a few nice features. In particular,

1. It allows one to program in a direct style by pretending one has magical access to requested values.
2. It makes a clear distinction between values that are directly computed and non-deterministically chosen values that need to be constrained (with assertions) to ensure correctness.
3. It makes testing verifiable computations simple, as one can set up “malicious” handlers that provide values other than the intended ones. The purpose here is to check that the assertions made by the computation do in fact rule out all values besides the intended ones.
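Here is a toy Python rendering of the request/handler model (not Snarky's actual API): the computation requests openings from its environment and constrains whatever it receives, and a "malicious" handler exercises the assertions exactly as point 3 describes.

```python
def winner(commitments, handler, H):
    votes = []
    for i, commitment in enumerate(commitments):
        nonce, vote = handler(("Open_ballot", i))  # request a value
        assert H(nonce, vote) == commitment        # constrain the response
        votes.append(vote)
    pepperoni = sum(1 for v in votes if v == "pepperoni")
    return "pepperoni" if pepperoni > len(votes) / 2 else "mushroom"

# Toy stand-in for a collision-resistant hash; a real scheme needs one.
H = lambda nonce, vote: f"{nonce}:{vote}"
ballots = [(1, "pepperoni"), (2, "pepperoni"), (3, "mushroom")]
commitments = [H(n, v) for n, v in ballots]

honest = lambda request: ballots[request[1]]
assert winner(commitments, honest, H) == "pepperoni"

# A malicious handler supplying a wrong opening trips the assertion,
# confirming the constraints rule out unintended values (point 3).
try:
    winner(commitments, lambda request: (0, "mushroom"), H)
    raise RuntimeError("assertion should have fired")
except AssertionError:
    pass
```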

# Wrapping up

Snarky helps us bridge the gap between high-level properties we want to prove using verifiable computations, and the low-level constraint systems we need to provide to SNARK constructions. It brings the promise of accountability and control over personal data through verifiable computing one step closer to practicality. The code is available on github, and we at O(1) Labs are using it in the development of our new cryptocurrency protocol that aims to power the examples described above and more.

If you find what we’re doing interesting, we’re hiring. You can find more info here. You can also sign up for our mailing list here.
