I just started a new job so I wanted a nice monitor, keyboard and mouse so I could trick myself into staying longer at the coalface (Xcode).
It has been a very long time since I bought a monitor (5+ years), and the last time I did, USB-C wasn’t widely available. All of my devices and peripherals are either USB-C or wireless at this point (bar my keyboard), so I typed in “USB-C Monitor” and one of the first results was the Dell P2720DC.
The monitor arrived in 1 day (!!!) from Dell Business and cost the princely sum of £342 (including £57 VAT which can be claimed back leaving the monitor at £285).
The base of the monitor has a plate that slots into the back of the screen with no screws. Very nice. There was a power cable, a DisplayPort cable, an HDMI cable and a big USB-C cable. Very nice.
Once I slotted the screen into the base I powered the device on, plugged the USB-C cable into the back and then into my shiny new work laptop. I plugged the keyboard into the old USB-A slot on the side of the device and to my surprise the USB-C cable was charging my laptop, driving the display and passing through the keyboard input to the laptop. All through one cable!
I had no idea USB-C could do this but it makes me very happy. I can pull the USB-C cable, put it into my non-work laptop, and my keyboard and display just work without having to fiddle with a multitude of dongles or cables.
It also works on an iPad Pro! I can use my iPad just the way I would with a laptop, and the keyboard and display work flawlessly! It does output in 4:3 which is frustrating (and I don’t think there is a way to fix that) but it feels futuristic.
All in all I am very happy with this display. It’s cheap, energy efficient, comes with all the cables you need, looks lovely, allows portrait or landscape orientations and has all the ports you would need in the year of our Lord 2020.
There has been an explosion in new note taking applications recently, spearheaded by Roam Research – a premium note taking application created by Conor White-Sullivan.
Many of these note taking apps are based on the belief that knowledge doesn’t exist in a vacuum – there is a good chance that as you take notes there are lots of opportunities for cross pollination of ideas. When taking a note, you can reference any of your other notes and eventually build up a graph of knowledge, with links between the ideas.
The key idea driving this personal-wiki revolution is hard to invent but easy to replicate. By that I mean it took many years for the Zettelkasten to reach a mass audience, but now that it exists we’ve seen a new app on the market copying Roam’s features seemingly every week.
We now have Foam, Athens, Obsidian & others pushing in on the space created by Roam, but I don’t think Roam (the company) has anything to sweat – they have a fervent following and they have done fantastically well getting productivity gurus onside to help promote the product.
The vision of a tool for all your knowledge.
I however gave up on Roam within a few weeks of use. It’s not for me. I think the idea of linking notes digitally is a massive improvement over a little box of paper based notes.
Universal search, longevity of notes, the ability to backup and the ability to lose your device and still have access to all your notes is a marvel – but I think there are some philosophical choices in the product that don’t suit me.
Roam seems to be deeply inspired by emacs and org-mode – and as such, feels like a relic of an older time. Everything in Roam is treated as a block of text, and yes, that text can be manipulated, searched and referenced in other blocks of text.
But it feels unbearably limited. Attacking org-mode on the internet is never a good idea – its fans are rabid and swear by the tool. In every online forum thread about knowledge base software you will find the diehard org-mode fan telling you about their tower of lovingly crafted note taking magnificence. And I’m not really attacking it – it’s just not for me.
It seems like we should be striving to do more in the year 2020 than replicating a paper note book or reference cards in a filing cabinet. I find time and time again all these ‘affordances’ in our digital life that map real world objects to digital counterparts.
The emacs / VSCode divide.
The File Explorer makes heavy allusions to a real world manila folder, your mail has ‘attachments’ as if you are sending pieces of paper around with paperclips on them, and you have a little trash bin where deleted files go.
All these little details mapping to the real world made an enormous amount of sense when the world didn’t understand computers.
But a modern computer has the capability for so much more – new ways of doing and thinking. When you start typing the double brackets in Roam to link another note, it prompts you to ‘Search for a Page’. I can link bits of text to other bits of text. And that’s it.
I can’t do anything with that graph above. I can move the nodes around, but it doesn’t save where I move them. I can’t drop files in there, I can’t see images. The graph in Roam is supposed to be the outcome of my connected thoughts, all the knowledge I’ve collected using the tool, but it can’t represent images or PDF files, for example. It feels very much like emacs in that regard – it feels limited in the modern world.
Where is my VS Code of connected knowledge? I don’t want to mentally have to convert pieces of knowledge (images, files, etc) into text counterparts (hyperlinks). I want the tool to provide value and stay out of my way. I want the computer to watch over me with loving grace. Look what happens when I open a new file type in VS Code.
What does emacs do when it encounters an unseen file type? Nothing. It stays quiet. Doesn’t offer any help. This suits some people – they love that control. Not me – I wish computers did more things like the VS Code prompt. Be smart, help me when you can, and do what I think you should do!
I want to drop a file into Roam and have it show in the graph, and I want to be able to connect absolutely everything and anything to it.
Images to PDFs, emails to Slack messages, and everything to my personal notes. I might be proven wrong about this in the future – but it doesn’t seem like this is in Roam’s wheelhouse. The primitive is text – not the graph.
A hyper graph? What’s that?
Say you are a secondary school teacher. Let’s say you teach history, and you teach numerous year groups. Let’s say from 1st year through to pre-university you teach the Romans, the Vikings, the founding of America, World War 1 and World War 2.
How would this look on today’s computers? You would maybe create a parent folder and split each topic into its own subfolder?
Each folder would have a number of files in it, and you would share the folder with your students and they would trudge through each folder in order.
But now let’s ask a question – what are you missing here? Well, aren’t you missing links between the events in each folder? How do you discover them at a glance? Can you? History is the study of the progression of time and events don’t happen in isolation.
So events that happen in one time period cause effects in others. The events that led up to World War 1 are incredibly important to the conditions in Germany that led to World War 2.
But the aforementioned folder structure almost makes it seem as if those events exist in isolation.
Where would the Treaty of Versailles sit in your folders? If you say WW1, you’re modelling your lessons in units, and valuable context is lost. Sure, you could copy the file into the WW2 folder to represent its importance to both. Lame!
This is a toy example, but the deeper I think about the promise of Roam and friends the more it frustrates me that no one has built something a bit more capable. A bicycle for the mind is great, but where is my electric bike?
Imagine instead the primitive isn’t files and folders, but nodes in a graph? That’s what I thought Roam was giving me, but the graph seems like an afterthought in that product. I think this is a mistake – the graph should be the tool I use to connect my thoughts.
In mathematics, a hypergraph is a generalization of a graph in which an edge can join any number of vertices.
While graph edges are 2-element subsets of nodes, hyperedges are arbitrary sets of nodes, and can therefore contain an arbitrary number of nodes.
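In symbols (a sketch of the standard definition):

```latex
H = (V, E), \qquad E \subseteq \mathcal{P}(V) \setminus \{\varnothing\}
```

An ordinary graph is just the special case where every edge $e \in E$ has $|e| = 2$; a hyperedge (a ‘period’ or ‘topic’ in the history example) is allowed to contain as many event-nodes as the idea needs.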
Keeping with the history example for a second – imagine now each vertex (node, point) in the graph above shows a different ‘event’ in history and the coloured sections are ‘periods’ or ‘topics’. This is so intuitive it barely needs any more explanation.
Anyone you show this to would get it without much effort. Doesn’t it seem that this structure is absolutely perfect for managing disparate, but conditionally interconnected knowledge?
You’ll notice that the image above shows some fairly organic looking “sub-graphs”, some live by themselves, some connect to nearby sub-graphs and then some are large all encompassing structures containing many subgraphs.
These structures are defined by connections, they aren’t categories to be filled up by your notes/files. That’s a huge issue in note taking apps – you need to categorise up front.
It can be solved by things like “reviews” where you dump all your notes into some temporary inbox, then once a week you go about categorising this into subfolders.
Fine, but I find I take a bunch of notes I never look at again, and wouldn’t it be great if they could just be dumped into the abyss, and if I ever need what I wrote in them again it could be surfaced by a simple keyword search?
Think of what you do for your day job. If you manage multiple projects, or try to keep a bunch of knowledge about interconnected topics together – can it be modelled using a hyper graph?
Could you collect all the jobs documents together (Email, IM Messages, Notes, Contacts), link them all together and get a clearer picture of the whole?
A hypergraph sounds exotic and nerdy – but it’s so intuitive because it models the messiness of our real data, the messiness our applications and databases put an immense effort into papering over.
Does it exist?
I know with a frightening degree of conviction a tool like this would help me. I am a software engineer by trade and the best way I can describe the job is that there are substrates that make up the role.
I need to know about general computer science to make my code run fast, I need to know programming languages that will run on the machine I want to build for, I work in a team made of multiple people that work on multiple projects and I have ideas for improving the products I work on.
So I had a look around for something like this. I found a few dead products, and a few research projects, but the closest product I can see that does *something* like this is Palantir.
Palantir allows a user to structure unstructured data. A note, text document, image or file is treated as a data point, and Palantir gives you a graph tool to start placing nodes together, creating links between them and analysing how all of that data fits together.
Nodes can be connected to other clusters of nodes by just drawing a link between the two. This doesn’t really mean anything under the hood as far as I can tell – it just means they have some relationship that is meaningful to the analyst. The connection itself can have metadata attached to it if required (i.e. this connection means a phone call took place between these two entities), but it’s not required.
A graph doesn’t have all the documents from the data sources you pump into it, you pick and choose which ones are relevant. You can then turn that graph into something – like a powerpoint, a shareable list of documents, or even just share the graph with your colleagues.
Unfortunately Palantir doesn’t seem like it is ever going to sell direct to consumers, and no matter how many times I put my gmail into their contact page, I never get a call back. Seems like I’m outside their target audience – a real shame.
Hand waving and platitudes
Okay, so I’m sort of joking about using Palantir for mapping personal knowledge and files. As far as I can tell a Palantir installation involves an engineer going in and installing a bunch of integrations with exotic databases – which makes sense given their clients.
Web 2.0 however has brought us an opportunity on the consumer side for building something like this.
Most applications and services you use to create content come with an API or at least a bulk-export feature to get data in or out. What if we used those APIs as the integration layer (in Palantir-speak) to build our knowledge base?
When lockdown started in the U.K I started looking for ways to spend my extended time at home. I wasn’t working because I had quit my job on Feb 16th to travel the world. Timing has never been my strong suit.
So instead of sitting around I decided to start a contracting company, got awarded a few small contracts and released a few apps written in SwiftUI. But I wanted a bigger project to sink my teeth into.
There were two ideas I kept coming back to:
A hosted org-mode in the browser.
Some sort of local social network.
Given that everyone was forced into their homes, I thought it would be the perfect time for something like the app YikYak to make a comeback.
Ghosts of apps past.
For those who never got a chance to use YikYak, let me try to explain it quickly. The app touted itself as the first hyperlocal social network. A user’s post could only be viewed if you were no further than 5km from the initial post location. You could vote posts up or down and comment underneath posts. It was simple but unique.
YikYak eventually implemented some very unpopular changes. Bullying, racism and doxxing were always a problem on YikYak. They didn’t do a huge deal to moderate the content on the app, instead leaving the community to self-police. Eventually they had to do something, so they enforced email or telephone signups.
Cue mass exodus of users.
Introducing Ottr 🦦
So I began work on Ottr in early February.
Ottr was going to be YikYak for the people in lockdown. In the same way TikTok came from the ashes of Vine – Ottr was to be the refinement of the ideas behind YikYak.
A user could create a new post and comment underneath other posts. They could upvote or downvote posts. They could click the Explore button and whisk off to a new location. You could share posts to Reddit or Twitter. I got all of these features working in a web app created in React, and an iOS app. It got funding from the Amazon Activate program.
But as of today Ottr is dead. I’m going to talk about why I no longer believe in the idea, the key failures that led to that decision, and why I am releasing absolutely everything I made.
For all its flaws – I absolutely loved YikYak. It came out the first year of my bachelors degree. It was by far the most used app on campus. I would be sitting in the university library and someone would post about the weird smell coming from the desks on the 3rd floor. Or the little nod the doorman would give you on your way in.
People would post about the crazy antics of their friend on a drunken night out. About the cute doggos in the parks. Or how awful the food from the students’ union was. Maybe they would post about their crush.
Those posting on YikYak were having the same experience you were. They were seeing the same things as you, and that made you feel engaged. It was anonymous, which ended up being a double-edged sword. It allowed for great humour and creativity, but sometimes the jokes went too far.
Anonymity would be YikYak’s greatest strength and its most fundamental weakness.
People got hurt using YikYak. Anonymity breeds toxicity – there are clear precedents for this on the internet – 4chan, 8chan and Reddit.
I knew that there was potential for this to happen again – but I thought I could solve it. My thought process was;
I will add a report button and over time I can just add moderators to keep on top of the reports.
Posts will be removed when they get downvoted too much.
I will redact PII and add profanity filters.
This arrogance was mistake number 1. When I eventually let a few people use the site they broke the profanity filters with ease. Even among ~100 users people wrote spam and cluttered up the front page. Going on the site in the first week made me wince at how ugly it looked, with people filling the character limit with posts consisting entirely of emoji. It made the site look amateur.
Moderation on the internet is an unsolved problem. Handing the moderation over to automated systems will mean bias can creep in, and it’s hard (impossible?) to tune between zealotry and permissiveness. With an automated system you are going to upset some people.
Humans > Robots?
Humans are better at moderation, but they cost much more money and come with their own problems. I think the shining example of human moderation is Hacker News. Daniel Gackle and Scott Bell tackle the problems inherent to internet forums with a competence not seen elsewhere.
The New Yorker wrote this great article about how difficult that is to achieve – and that’s with Hacker News having a distinct voice. Hacker News is a place for people to post about things that are intellectually interesting to hackers.
Ottr didn’t have a clear voice – it was a place for people to post about things going on in their community. That could mean anything.
Something that is acceptable in one location, might not work for people elsewhere. You could have moderators from each community – but then you are giving that moderator a lot of power to make others unhappy.
I think Reddit is a horrible place for new posters – and part of the blame falls on moderators having too much power. Some moderators on Reddit have so much power they can control the front page, and with that, the tone of the site. I really didn’t want that.
It appears we’re at an impasse.
So if humans and robots couldn’t be trusted, what could? Ottr made the same compromise YikYak did – of letting the community police itself – and it failed (I think) because too few people used Ottr.
A self policing community has great potential to trend towards its worst elements. If someone posted something false about me on the internet, I would have great incentive to downvote it – but why would anyone else – especially when there was the potential for scandal, entertainment and outrage.
I care about how the site looks to first time users, but why would anyone else?
And so those stupid emoji posts stayed on the front page.
Is anybody alive out there?
So poor moderation options left Ottr open to the same kinds of toxicity that YikYak failed to combat. The second key issue was that I didn’t know how to get people to use Ottr in the first place.
I’ve never read a book on growth hacking or how to make your product sticky. I kind of understand when it’s happening to me (“Hand over your contacts to get some shiny coins?”, “How about you share this post on Facebook and Twitter?”, “Our site is restricting registrations at this time – sign up with your email to be notified when we have an opening?”), but it always rubbed me the wrong way.
I wanted a few people to start using Ottr and then they would be so blown away by how good it was that they would tell their friends and share it themselves.
Build it and they will come?
I focused on UX simplicity and features and the idea of getting people to use it was always running on a low priority background thread. Instead of getting the first iteration up and focusing on what people said about that, I had a clear vision of what it was to be like, what I would add from YikYak, what I would change, and then I would hand it to the people and they would say “Yes Adam – very good, thank you!”.
Treating user growth as a secondary concern is the dumbest thing you can do for a social network. It is all that matters. Mistake #2 was thinking I was above this somehow. I treated growth-hacking and user-growth as something for other people to be concerned with. That is a juvenile approach to business. Just because I can code doesn’t mean I am somehow too pure to engage in the business of marketing or advertising the things I code.
I am working on fixing that this year and I’m trying to understand, rather than scoff at, the ideas behind growing and marketing the things I make. So far it seems much more difficult than learning to code. I didn’t understand or appreciate this until I reflected on the time I spent on Ottr.
Burn, Baby Burn.
By copying the mistakes of YikYak and ignoring the fundamentals behind making a social network work, Ottr never got off the ground. You might argue that I didn’t try for very long – and you are right, I didn’t, around 4 months in total. Mistake #3 is the real reason I wanted to be done with the whole project. I was completely burnt out.
As I mentioned before, I had just quit my job in February to travel. For reasons I won’t go in to, consuming myself with Ottr directly after leaving my last job was a really bad idea.
As I couldn’t travel because of the pandemic and because I had already planned for my mini-retirement I had enough money that meant I didn’t need to jump into another job. The contract work I was getting was not very engaging – so I suddenly had a lot of free time.
So I threw myself into Ottr. I remember looking at Screen Time stats at the time and seeing emacs open for 80+ hours a week. I stopped reading. Stopped watching movies. From waking up to going to sleep I’d just be working on it. Ottr went through a few iterations and I threw huge chunks of work away at points.
Then the wheels came off.
Then one fine day it all stopped being worth it. I was riding along on a wave of getting the next feature done. Talking to a partner at Amazon helping with deployment. Tinkering with CloudFlare because I thought that now it was going into production, I would obviously be inundated. But when the time came and it was actually “ready”, I became immediately repulsed by the idea of trying to grow Ottr.
Doubt crept in. “This is just a copy of YikYak”, and it was, but YikYak doesn’t exist anymore. There is nothing new under the sun.
No matter how hard I tried I couldn’t produce an ounce of enthusiasm for the project any more.
Can some good come from Ottr?
This sudden lack of enthusiasm reminded me of the book ‘The War of Art’. I think I am so afraid of failure that I can never really finish anything. I love the process of creation. But I hate the idea that someone would judge that creation. Procrastination kicks in, I begin to doubt myself and I believe I am a huge fraud.
Then one day I replied to a tweet asking what tech stack people used. I wrote “React, NodeJs & Postgres”. A few hours later someone replied:
I realised then that I possess some very valuable information. I know how to create and deploy many disparate parts of a software system.
Now, I may not know how to grow it. And I might have profound issues with self-confidence. But I do actually have something valuable I can give the world with Ottr.
It’s a ready made tech stack. The code could be used to teach people all the moving parts of a modern application. They could use this as a learning experience for at least one way to get started creating their own.
Even in failure, there is success.
So today I have made the Ottr repositories public. You can download each part of the system from the database, the backend, the web client and the app and you can get them all up and running on your computer. I want Ottr to be a place where people can go to answer;
How do I get text from a react component onto someone else’s screen?
How do I connect to a database?
Can I get one of those little green locks on my webpage?
Is it possible to have real time updates when someone posts a new comment?
Where can I put secrets?
How should I go about structuring my code?
Now it will take time before all of this is in its simplest form inside these repositories. Ottr lacks tests and there are some hacks and crappy code here and there – but that doesn’t make it valueless.
A happy ending?
Today the code is open for people to tinker with, but continued value will come from a series of blog posts I am writing explaining high level concepts of software such as those detailed above, using Ottr as a base for examples.
There are so many questions a beginner has about modern software – and Ottr, even in its current form, can answer a lot of them. I want it to be a flat pack startup.
Now that the source is open – you can help make it better. Pull requests very much welcome!
Here lies the final(?) resting place of Ottr: 2020 – 2020.
Please let me know if anything breaks – or if you would like any part explained in more detail.
ottr-deploy – Instructions for how to run the Ottr tech stack.
In my day-to-day as a developer I make use of a lot of different tools. Emacs for writing code. mbsync/isync for pulling down new emails. Vim for quickly editing files on remote hosts. Tmux for ssh’ing into servers. I like to configure this software to make it feel exactly right to me.
For example, my emacs configuration file is about 300 lines long and contains things like custom themes and fonts, making the UI as simple as possible and a bunch of little lisp snippets to help me automate repetitive tasks.
Over the course of their career a software engineer will need to set up new computers. If they don’t have a way of porting over their old configuration files to the new computer, they will waste time every time they need to do this. So they think: “I know! I will just store my configuration files somewhere and sync them between each computer!”
Not a bad idea at all. There are a few ways to do this. Most end up using Dropbox or GitHub. My initial attempt at solving this was to create a folder containing all my files and then create symlinks in my home directory.
However, I quickly found that this was a pain in the ass. If you do a Google search for “why are symlinks bad” you will likely end up at this cat-v article, which sums up the issues with symlinks.
The problem with symlinks.
My main problem with them is that they break some foundational assumptions about the file system. If you open a symlink should you be transported to the directory containing the linked file?
What about double dot in the terminal – in the context of an open symlink does the double dot mean up a directory from the containing folder, or up a directory from the folder containing the symlinked file?
TL;DR: Symlinks are a kludge and in some cases they break simple file hierarchy semantics.
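The ‘..’ ambiguity is easy to see in a terminal. A small sketch (the /tmp/symdemo paths are made up for the demo): the shell tracks the logical path you cd’d through, while pwd -P reveals the physical one the kernel knows about.

```shell
# Demonstrate the '..' ambiguity: the shell resolves '..' against
# the logical path you typed, while the kernel only knows the
# physical location of the directory.
rm -rf /tmp/symdemo
mkdir -p /tmp/symdemo/real/deep
ln -s /tmp/symdemo/real/deep /tmp/symdemo/shortcut

cd /tmp/symdemo/shortcut
pwd          # /tmp/symdemo/shortcut   (logical path)
pwd -P       # /tmp/symdemo/real/deep  (physical path)

cd ..        # the shell takes you "up" to /tmp/symdemo,
pwd          # but the physical parent was /tmp/symdemo/real
```

Two answers to one question, depending on who you ask – which is exactly the broken assumption the cat-v article complains about.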
So how do we fix it?
Grin and bare it.
Git has around 1.5 million individual commands. One of its options is --bare, used when initialising a new git repository.
By initialising a git bare repo in our home directory we can still track individual files, we can push them to a remote, and we can do it without having to resort to using symlinks.
So how does it work? Here are the few simple commands to get it set up.
git init --bare $HOME/.cfg   # create a bare repo at ~/.cfg to hold the history
alias config='/usr/bin/git --git-dir=$HOME/.cfg/ --work-tree=$HOME'   # 'config' is git aimed at that repo, with $HOME as its work tree
config config status.showUntrackedFiles no   # stop 'config status' listing every untracked file in $HOME
So now we are ready to add files to our repo. Adding that alias to your bash or zsh profile will be helpful for the future.
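To make the alias permanent, append it to your shell profile (assuming zsh here – use .bashrc for bash):

```shell
# Persist the 'config' alias across shell sessions.
# The backslashes keep $HOME literal in the file; it expands
# each time the alias is actually used.
echo "alias config='/usr/bin/git --git-dir=\$HOME/.cfg/ --work-tree=\$HOME'" >> "$HOME/.zshrc"
```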
Adding some files.
So now we can start adding all of our configuration files.
config add .emacs
config commit -m "Add .emacs config"
config add .config/nvim/init.vim
config commit -m "Add vim config"
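The last piece is syncing between machines. A sketch of the round trip – the remote URL is a placeholder you would swap for your own repo, and I’m assuming the default branch is called master:

```shell
# Push the config repo to a remote (placeholder URL -- use your own).
config remote add origin git@github.com:you/dotfiles.git
config push -u origin master

# On a brand new machine: clone the bare repo, recreate the alias,
# then check the tracked files out directly into $HOME.
git clone --bare git@github.com:you/dotfiles.git "$HOME/.cfg"
alias config='/usr/bin/git --git-dir=$HOME/.cfg/ --work-tree=$HOME'
config checkout      # aborts if untracked files would be overwritten -- back those up first
config config status.showUntrackedFiles no
```

No symlinks, no third-party sync tool – just one bare repo and one alias on every machine.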