My failed start-up

Why it failed and why I’m releasing all the source code for free.

This is a long post.

If you are just looking for the code click here

When lockdown started in the U.K. I started looking for ways to spend my extended time at home. I wasn’t working because I had quit my job on Feb 16th to travel the world. Timing has never been my strong suit.

So instead of sitting around I decided to start a contracting company, got awarded a few small contracts and I released a few apps written in SwiftUI. But I wanted a bigger project to sink my teeth into.

There were two ideas I kept coming back to:

  1. A hosted org-mode in the browser.
  2. Some sort of local social network.

Given that everyone was forced into their homes, I thought it would be the perfect time for something like the app YikYak to make a comeback.

Ghosts of apps past.

For those who never got a chance to use YikYak, let me try to explain it quickly. The app touted itself as the first hyperlocal social network. A user’s post could only be viewed if you were no further than 5km from the initial post location. You could vote posts up or down and comment underneath them. It was simple but unique.

YikYak eventually implemented some very unpopular changes. Bullying, racism and doxxing were always problems on YikYak. They didn’t do much to moderate the content on the app, instead leaving the community to self-police. Eventually they had to do something, so they enforced email or telephone signups.

Cue mass exodus of users.

Introducing Ottr 🦦

So I began work on Ottr in early February.

Ottr was going to be YikYak for the people in lockdown. In the same way TikTok came from the ashes of Vine – Ottr was to be the refinement of the ideas behind YikYak.

  • Ottr's home page
  • A new post
  • Comments under a post
  • Explore mode

A user could create a new post and comment underneath other posts. They could upvote or downvote posts. They could click the Explore button and whisk off to a new location. You could share posts to Reddit or Twitter. I got all of these features working in a web app created in React, and an iOS app. It got funding from the Amazon Activate program.

The components of Ottr

But as of today Ottr is dead. I’m going to talk about why I no longer believe in the idea, the key failures that led to that decision, and why I am releasing absolutely everything I made.

On anonymity.

For all its flaws – I absolutely loved YikYak. It came out in the first year of my bachelor’s degree. It was by far the most used app on campus. I would be sitting in the university library and someone would post about the weird smell coming from the desks on the 3rd floor. Or the little nod the doorman would give you on your way in.

People would post about the crazy antics of their friend on a drunken night out. About the cute doggos in the parks. Or how awful the food from the students’ union was. Maybe they would post about their crush.

Those posting on YikYak were having the same experience you were. They were seeing the same things as you, and that made you feel engaged. It was anonymous, which ended up being a double-edged sword. It allowed for great humour and creativity, but sometimes the jokes went too far.

Anonymity would be YikYak’s greatest strength and its most fundamental weakness.

And yet…

People got hurt using YikYak. Anonymity breeds toxicity – there are clear precedents for this on the internet – 4chan, 8chan and Reddit.

I knew that there was potential for this to happen again – but I thought I could solve it. My thought process was:

  • I will add a report button and over time I can just add moderators to keep on top of the reports.
  • Posts will be removed when they get downvoted too much.
  • I will redact PII and add profanity filters.

This arrogance was mistake number 1. When I eventually let a few people use the site they broke the profanity filters with ease. Even among ~100 users people wrote spam and cluttered up the front page. Going on the site in the first week made me wince at how ugly it looked, with people filling the character limit with posts consisting entirely of emoji. It made the site look amateur.

Moderation on the internet is an unsolved problem. Handing the moderation over to automated systems will mean bias can creep in, and it’s hard (impossible?) to tune between zealotry and permissiveness. With an automated system you are going to upset some people.

Humans > Robots?

Humans are better at moderation, but they cost much more money and come with their own problems. I think the shining example of human moderation is Hacker News. Daniel Gackle and Scott Bell tackle the problems inherent in internet forums with a competence not seen elsewhere.

The New Yorker wrote this great article about how difficult that is to achieve – and that’s with Hacker News having a distinct voice. Hacker News is a place for people to post about things that are intellectually interesting to hackers.

Ottr didn’t have a clear voice – it was a place for people to post about things going on in their community. That could mean anything.

Something that is acceptable in one location, might not work for people elsewhere. You could have moderators from each community – but then you are giving that moderator a lot of power to make others unhappy.

I think Reddit is a horrible place for new posters – and part of the blame falls on moderators having too much power. Some moderators on Reddit have so much power they can control the front page, and with that, the tone of the site. I really didn’t want that.

It appears we’re at an impasse.

So if humans and robots couldn’t be trusted, what could? Ottr made the same compromise YikYak did – of letting the community police itself – and it failed (I think) because too few people used Ottr.

A self-policing community has great potential to trend towards its worst elements. If someone posted something false about me on the internet, I would have great incentive to downvote it – but why would anyone else, especially when there is the potential for scandal, entertainment and outrage?

I care about how the site looks to first time users, but why would anyone else?

And so those stupid emoji posts stayed on the front page.

Is anybody alive out there?

So poor moderation options left Ottr open to the same kinds of toxicity that YikYak failed to combat. The second key issue was the fact that I don’t know how to get people to use Ottr in the first place.

I’ve never read a book on growth hacking or how to make your product sticky. I kind of understand when it’s happening to me (“Hand over your contacts to get some shiny coins?”, “How about you share this post on Facebook and Twitter?”, “Our site is restricting registrations at this time – sign up with your email to be notified when we have an opening?”), but it always rubbed me the wrong way.

I wanted a few people to start using Ottr and then they would be so blown away by how good it was that they would tell their friends and share it themselves.

Build it and they will come?

I focused on UX simplicity and features and the idea of getting people to use it was always running on a low priority background thread. Instead of getting the first iteration up and focusing on what people said about that, I had a clear vision of what it was to be like, what I would add from YikYak, what I would change, and then I would hand it to the people and they would say “Yes Adam – very good, thank you!”.

Treating user growth as a secondary concern is the dumbest thing you can do for a social network. It is all that matters. Mistake #2 was thinking I was above this somehow. I treated growth hacking and user growth as something for other people to be concerned with. That is a juvenile approach to business. Just because I can code doesn’t mean that I am somehow too pure to engage in the business of marketing or advertising the things I code.

I am working on fixing that this year, and I’m trying to understand, rather than scoff at, the ideas behind growing and marketing the things I make. So far it seems much more difficult than learning to code. I didn’t understand or appreciate this until I reflected on the time I spent on Ottr.

Burn, Baby Burn.

By copying the mistakes of YikYak and ignoring the fundamentals of making a social network work, Ottr never got off the ground. You might argue that I didn’t try for very long – and you are right, I didn’t, around 4 months in total. Mistake #3 is the real reason I wanted to be done with the whole project. I was completely burnt out.

As I mentioned before, I had just quit my job in February to travel. For reasons I won’t go in to, consuming myself with Ottr directly after leaving my last job was a really bad idea.

As I couldn’t travel because of the pandemic, and because I had already planned for my mini-retirement, I had enough money that I didn’t need to jump into another job. The contract work I was getting was not very engaging – so I suddenly had a lot of free time.

So I threw myself into Ottr. I remember looking at my Screen Time stats and seeing Emacs open for 80+ hours a week. I stopped reading. Stopped watching movies. From waking up to going to sleep I’d just be working on it. Ottr went through a few iterations and I threw huge chunks of work away at points.

Then the wheels came off.

Then one fine day it all stopped being worth it. I was riding along on a wave of getting the next feature done. Talking to a partner at Amazon helping with deployment. Tinkering with Cloudflare because, now that it was going into production, I would obviously be inundated. But when the time came that it was actually “ready”, I was immediately repulsed by the idea of trying to grow Ottr.

Doubt crept in. “This is just a copy of YikYak”, and it was, but YikYak doesn’t exist anymore. There is nothing new under the sun.

No matter how hard I tried I couldn’t produce an ounce of enthusiasm for the project any more.

Can some good come from Ottr?

This sudden lack of enthusiasm reminded me of the book ‘The War of Art‘. I think I am so afraid of failure that I can never really finish anything. I love the process of creation. But I hate the idea that someone would judge that creation. Procrastination kicks in, I begin to doubt myself and I believe I am a huge fraud.

Then one day I replied to a tweet asking what tech stack people used. I wrote “React, NodeJs & Postgres”. A few hours later someone replied;

A tweet from a new coder. Even though my startup failed, people could learn from it.

I realised then that I possess some very valuable information. I know how to create and deploy many disparate parts of a software system.

Now, I may not know how to grow it. And I might have profound issues with self-confidence. But I do actually have something valuable I can give the world with Ottr.

It’s a ready-made tech stack. The code could be used to teach people all the moving parts of a modern application. They could use it as a learning experience for at least one way to get started creating their own.

Even in failure, there is success.

So today I have made the Ottr repositories public. You can download each part of the system – the database, the backend, the web client and the app – and get them all up and running on your computer. I want Ottr to be a place where people can go to answer:

  • How do I get text from a react component onto someone else’s screen?
  • How do I connect to a database?
  • Can I get one of those little green locks on my webpage?
  • Is it possible to have real time updates when someone posts a new comment?
  • Where can I put secrets?
  • How should I go about structuring my code?

Now it will take time before all of this is in its simplest form inside these repositories. Ottr lacks tests and there are some hacks and crappy code here and there – but that doesn’t make it valueless.

A happy ending?

Today the code is open for people to tinker with, but continued value will come from a series of blog posts I am writing explaining high level concepts of software such as those detailed above, using Ottr as a base for examples.

There are so many questions a beginner has about modern software – and Ottr, even in its current form, can answer a lot of them. I want it to be a flat pack startup.

Now that the source is open – you can help make it better. Pull requests very much welcome!

RIP Ottr

Here lies the final(?) resting place of Ottr: 2020 – 2020.

Please let me know if anything breaks – or if you would like any part explained in more detail.


Dotfiles without Symlinks

In my day-to-day as a developer I make use of a lot of different tools. Emacs for writing code. mbsync/isync for pulling down new emails. Vim for quickly editing files on remote hosts. Tmux for ssh’ing into servers. I like to configure this software to make it feel exactly right to me.

For example, my emacs configuration file is about 300 lines long and contains things like custom themes and fonts, making the UI as simple as possible and a bunch of little lisp snippets to help me automate repetitive tasks.

Over the course of their career a software engineer will need to set up new computers. If they don’t have a way of porting over their old configuration files, they will waste time on every new machine. So they think: “I know! I will just store my configuration files somewhere and sync them between each computer!”

Not a bad idea at all. There are a few ways to do this. Most end up using Dropbox or GitHub. My initial attempt at solving this was to create a folder containing all my files and then create symlinks in my home directory.

However, I quickly found that this was a pain in the ass. If you do a Google search for “why are symlinks bad” you will likely end up at this cat-v article, which sums up the issues with symlinks.

The problem with symlinks.

My main problem with them is that they break some foundational assumptions about the file system. If you open a symlink should you be transported to the directory containing the linked file?

What about double dot in the terminal – in the context of an open symlink does the double dot mean up a directory from the containing folder, or up a directory from the folder containing the symlinked file?
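You can see the ambiguity for yourself in a scratch directory (the paths below are made up purely for the demonstration):

```shell
# build a tiny hierarchy with a symlink pointing into a deep directory
base=$(mktemp -d)
mkdir -p "$base/real/deep"
ln -s "$base/real/deep" "$base/link"

cd "$base/link"
pwd     # the shell's logical view: .../link
pwd -P  # the physical location:   .../real/deep

cd ..   # bash resolves '..' against the logical path...
pwd     # ...so you land back in $base, not in $base/real
```

Tools disagree here: the shell’s `cd` is logical by default, while `cd -P` – and anything that asks the kernel to resolve the path – follows the physical location. That inconsistency is exactly what the cat-v article complains about.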

TL;DR: Symlinks are a kludge and in some cases they break simple file hierarchy semantics.

So how do we fix it?

Grin and bare it.

Git has around 1.5 million individual commands. One of those is the --bare flag, used when initialising a new git repository.

By initialising a git bare repo in our home directory we can still track individual files, we can push them to a remote, and we can do it without having to resort to using symlinks.

So how does it work? Here are the few simple commands to get it set up.

cd ~;
git init --bare $HOME/.cfg

alias config='/usr/bin/git --git-dir=$HOME/.cfg/ --work-tree=$HOME'

config config status.showUntrackedFiles no

So now we are ready to add files to our repo. Adding that alias to your bash or zsh profile will be helpful for the future.

Adding some files.

So now we can start adding all of our configuration files.

config status
config add .emacs
config commit -m "Add .emacs config"
config add .config/nvim/init.vim
config commit -m "Add vim config"

Pushing to a remote.

config remote add origin <REMOTE_GITHUB_URL>
config push -u origin master

If everything went well you should now see your changes in GitHub. Well done – we’re all sorted for now on this computer!


And to go full circle, let’s see how we can then clone the dotfiles to a new computer.

git clone needs the repository URL, and it will refuse to clone directly into a non-empty home directory – so clone bare first, then check the files out:

git clone --bare <REMOTE_GITHUB_URL> $HOME/.cfg
/usr/bin/git --git-dir=$HOME/.cfg/ --work-tree=$HOME checkout
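If you want to rehearse the whole round trip before pointing it at GitHub, you can use a local directory as a stand-in for the remote. This is only a sketch – the paths, dotfile contents and commit messages are made up for the demonstration:

```shell
# scratch directories: a fake "remote" and a fake new machine's home
remote=$(mktemp -d)/dotfiles.git
home=$(mktemp -d)
git init -q --bare "$remote"

# "old machine": commit a dotfile and push it to the remote
old=$(mktemp -d)
git -C "$old" init -q
git -C "$old" config user.email you@example.com
git -C "$old" config user.name "You"
echo '(setq inhibit-startup-screen t)' > "$old/.emacs"
git -C "$old" add .emacs
git -C "$old" commit -qm 'Add .emacs'
branch=$(git --git-dir="$remote" symbolic-ref --short HEAD)
git -C "$old" push -q "$remote" HEAD:"$branch"

# "new machine": bare-clone into .cfg, then check the dotfiles out
git clone -q --bare "$remote" "$home/.cfg"
git --git-dir="$home/.cfg" --work-tree="$home" checkout
ls -A "$home"   # .cfg and .emacs should both be there
```

The checkout step will refuse to overwrite existing untracked files – on a real machine, move any conflicting dotfiles aside first.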


Recreating YikYak with Postgres.

YikYak was an anonymous social network that used your location to show you posts 5km around you. Users of the app could create new posts and the people around them could view the posts and vote up or down.

YikYak filed a few patents for the tech that helped them achieve this. The patents mention segmenting users into buckets by their physical location. One modern tool we have to recreate this type of user segmentation is a data-structure called an R-Tree.

An example of an R-Tree in action

R-trees are tree data structures used for spatial access methods, i.e., for indexing multi-dimensional information such as geographical coordinates, rectangles or polygons.

Luckily the Postgres database enables us to make use of this data structure via geospatial extensions. In this post I am going to:

  1. Show how we can enable those extensions.
  2. Seed a few posts into our database.
  3. Find the posts in a small area around a specific latitude and longitude using a SQL query.

Let’s get started!

Creating tables.

Firstly you will need an instance of Postgres. It is easy to set up in Docker (I’ve detailed a post here showing how).

I am going to be using DBeaver for this tutorial but you could use psql or any other Postgres connector. Let’s create a new table for our posts.

Select the SQL Editor
Choose whatever database you want. I am going with Postgres
Name your script

Ready to go – so below we have a simple example of a table for storing new posts. I am using separate latitude and longitude columns to show how the extensions work, but you could also combine the two into a single POINT column.

CREATE TABLE post (
	post_content text NOT NULL,
	latitude float8 NOT NULL,
	longitude float8 NOT NULL
);

On executing that you should have a table you can start inserting values into.

Inserting posts.

So let’s start out by inserting two posts, the first posted from 10 Downing Street, and the second from Buckingham Palace.

INSERT INTO post (post_content, latitude, longitude) VALUES
	('I absolutely love the Queen. I hope she thinks I am doing a good job.', 51.5034, -0.1276),  -- 10 Downing Street (approx.)
	('The new Prime Minister is a prat! I do hope he doesnt come over often', 51.5014, -0.1419);  -- Buckingham Palace (approx.)

Now let’s add another post from an aspiring politics student at Cambridge University (65 miles away). This gives us an outlier that won’t show up once we run location-bound queries later in this tutorial.

INSERT INTO post (post_content, latitude, longitude) VALUES
	('Day one of my politics degree. Shall be most fun to stalk the halls of Westminster in 4 years.', 52.2053, 0.1218);  -- Cambridge (approx.)

Installing Postgres extensions

We would like to be able to stand in St. James’s Park (a large park between 10 Downing Street and Buckingham Palace) and see the two posts close by, but not the one from Cambridge.

So how do we do that? Through extensions! Postgres enables users to incrementally add features that help us do new things with our data.

Once they are installed we can use the latitude and longitude of 51.5032, -0.1349 to create a new select query on our posts table.

You can install extensions in Postgres simply by running a query. The two extensions we need are cube and earthdistance.
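They are just two one-line queries (cube must come first, since earthdistance depends on it):

```sql
CREATE EXTENSION IF NOT EXISTS cube;
CREATE EXTENSION IF NOT EXISTS earthdistance;
```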


After executing those two queries, you should see them under the ‘Extensions’ tab in DBeaver.

Finding nearby posts.

We can now use the built-in functions from those extensions to show us the two nearby posts.

SELECT * FROM post
WHERE earth_box(ll_to_earth(51.5032, -0.1349), 50000)
	@> ll_to_earth(latitude, longitude);

The earth_box function takes two parameters: a point (which is returned by the ll_to_earth function) and the size in metres of the bounding box we want.

By using the contains? operator (@>) we are saying we only want values in the table in the bounding box generated by the earth_box function.

When executing that query we will see the two posts we were expecting! Try increasing the bounding box range and you will be able to see the Cambridge post too.

So now we have a working example of how to recreate the YikYak location-based functionality.


Okay so why did we need those extensions? Can’t we just take the world, split it into squares and determine which box a latitude and longitude falls into?

That’s what we would like to do – but there are complications caused by the fact that the world is a sphere. To find posts “in your area” you are querying for the straight-line distance between two points: your lat-long and that of each row in the database. On a sphere there are no straight lines.

There is a way to determine the distance between two points known as the Great-Circle distance. Instead of using straight lines we use circles or curves known as geodesics. Through any two points on a sphere that are not directly opposite each other, there is a unique great circle.

The earthdistance extension, combined with the contains? operator from the cube extension, lets us run efficient distance lookups between points.
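If you want to sanity-check the distances themselves, earthdistance also ships an earth_distance function, which returns the great-circle distance in metres between two points – here using the same approximate coordinates as above:

```sql
SELECT earth_distance(
	ll_to_earth(51.5032, -0.1349),  -- St. James's Park
	ll_to_earth(52.2053, 0.1218)    -- Cambridge (approximate)
) AS metres;
-- comfortably more than the 50,000m bounding box,
-- which is why the Cambridge post was excluded
```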


One thing to note is that this query will do a sequential scan of the entire table, which can be slow once you get up to thousands of posts.

If you do decide to use this setup in your application, you should create an index on latitude and longitude to dramatically speed up queries. That would look like this.

CREATE INDEX loc_index ON post USING gist (ll_to_earth(latitude, longitude));

Postgres will then determine whether it needs to use this index to speed up queries. You can check whether the index is being used by viewing the execution plan when you run the query detailed above. If it says Seq Scan, it is not using the index.
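For example, prefixing the query with EXPLAIN will print the plan:

```sql
EXPLAIN
SELECT * FROM post
WHERE earth_box(ll_to_earth(51.5032, -0.1349), 50000)
	@> ll_to_earth(latitude, longitude);
-- once the table is large enough for the planner to prefer the index,
-- this reports an Index Scan using loc_index instead of a Seq Scan
```

Note that on a tiny table (like our three example rows) the planner will usually stick with a sequential scan anyway, because scanning a handful of rows is cheaper than consulting the index.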

And we’re done! If you’ve noticed any mistakes or improvements I can make please drop me an email at


Postgres on Docker

Don’t care about why you would want to run Postgres in Docker? Skip to the commands.

Applications and Websites are a little bit useless without data.

So we intrepid developers like to hold that data somewhere to be used later. There are many forms of storage that we can use, such as cookies, local storage in the browser, a SQLite database for our iOS and Android apps or a database that we can pull data out of using an API.

When working on a project that uses a database, it can be painful to keep everyone’s environments in sync – if Alice is using Postgres version 4 and Bob is using version 6, there could be a time when Bob writes some procedure that runs fine on his database, but when Alice tries to run it – it blows up!

This is where Docker comes in handy: you can ship a Dockerfile in your project’s source code, put it up on GitHub, and tell all the developers working on your project to run it – then you can rest assured that your code will continue to work.

In this tutorial I am going to explain how to install & use Postgres, a popular relational database.

Let’s get started!

Installing Docker

First we need to set up Docker, if you haven’t already done so. Once that is done, make sure Docker is running by opening your terminal and typing:

docker --version 

If you get an error along the lines of:

Cannot connect to the Docker daemon. Is the docker daemon running on this host?

You need to run Docker CE. How to do that differs between operating systems – click here for a thread on how to troubleshoot it.

If everything went well you should see the current version number and are ready to go on to the next steps.

Creating a persistent volume

We want to create a folder on our computer so when we restart the Docker container all of our data will still be there. If we didn’t take this step, each time we start and stop our container we would be starting from scratch!

Open your terminal and type the following to create that volume.

mkdir -p $HOME/docker/volumes/postgres

Pulling Postgres Image from DockerHub

Here we are going to see the beauty of Docker. The Postgres team have already created a preset image that developers who want to use Postgres can download and use. It contains all the code, packaged in a way that can be shared and used with confidence by all the developers in your team. Okay, let’s hop back over to your terminal and run:

docker pull postgres

Running a container using the Postgres Image

Now we want to create an instance of a database (i.e. we want to create a container using the Postgres image we just pulled as a base), and we need to pass it some parameters for it to work the way we want it to.

docker run \
    --rm --name pg-docker \
    -e POSTGRES_PASSWORD=docker \
    -d -p 5432:5432 \
    -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data \
    postgres

So let’s unpack this a little:

  • docker run --rm --name pg-docker
    • Run a container named pg-docker, removing it automatically once it stops.
  • -e POSTGRES_PASSWORD=docker
    • Set the environment variable POSTGRES_PASSWORD to ‘docker’
  • -d -p 5432:5432
    • -d = Run in detached mode (i.e return control to the terminal after running.)
    • -p Map local port 5432 to Container port 5432 (the default Postgres Port)
  • -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data
    • Map the local volume we created earlier to the internal folder on the container named /var/lib/postgresql/data
  • postgres
    • Use the image named ‘postgres’ which we pulled earlier


You should now be running Postgres on Docker!

You can verify by running

docker ps | grep pg-docker

Which should show you the following;

Postgres running in Docker!

Now you can use psql to connect to the running instance, or connect to it from your applications, or view the database using a GUI tool like DBeaver!

Fancy making something useful now you’ve got Postgres set up? Click here to see how to use Postgres to make a location-based social network.


docker pull postgres;
mkdir -p $HOME/docker/volumes/postgres;
docker run \
    --rm --name pg-docker \
    -e POSTGRES_PASSWORD=docker \
    -d -p 5432:5432 \
    -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data \
    postgres
psql -h localhost -U postgres -d postgres