One of the most empowering things I learned during my studies at university was how to read scientific papers. I've been astonished by the amount of scientific knowledge available out there, mostly in a very compressed format - a lot shorter and more to the point than books. After I received my master's degree, I planned to keep reading scientific papers, one a month to be precise.
Long story short, I didn't. After starting a new job, my free time shrank and I forgot about my plans. However, I collected a handful of very interesting papers I read at the beginning of the year that I want to recommend. The following is more of a look back than a summary or assessment of the papers' contents. Hopefully this can also serve as a public reminder for myself to start reading papers again.
Bitcoin: A Peer-to-Peer Electronic Cash System
This is a paper anyone interested in economics and/or computer science should read. It's the famous original paper that introduced the idea of the Bitcoin currency to the public. Its author has remained anonymous to this day.
Being interested in both computer science and economics, I was curious to learn about cryptocurrencies when I had a few free months after finishing university. Bitcoin had been present in newspapers, TV shows and online of course - but I never understood how it actually worked. When I did some research on Bitcoin and blockchain technology, I was really surprised by the lack of a good beginner's explanation. Therefore I decided to give the original publication a try. It explains both the motivation and the technology quite well in just a few pages! In my estimation, it can easily be understood without a degree in computer science. All you need to know is the definition of a linked list and a hash function - the rest is explained in the paper.
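That "linked list plus hash function" core can be sketched in a few lines. This is my own toy illustration, not code from the paper: real Bitcoin blocks carry headers, timestamps, Merkle roots and proof-of-work, and use double SHA-256, while here each block just stores its data and the hash of its predecessor.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents with SHA-256 (a simplification of Bitcoin's double SHA-256)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    """A block stores its payload plus the hash of the previous block - a hash-linked list."""
    return {"data": data, "prev_hash": prev_hash}

# Build a tiny chain: each block commits to the one before it.
genesis = make_block("genesis", prev_hash="0" * 64)
block1 = make_block("Alice pays Bob 1 BTC", prev_hash=block_hash(genesis))
block2 = make_block("Bob pays Carol 0.5 BTC", prev_hash=block_hash(block1))

# Tampering with an earlier block changes its hash,
# which breaks every link that comes after it.
genesis["data"] = "genesis (tampered)"
assert block1["prev_hash"] != block_hash(genesis)
```

This chaining is why rewriting history is hard: an attacker who changes one block has to redo every later block as well.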
I won't discuss cryptocurrencies or what I think of Bitcoin in this post (might come later), but if you simply want to learn how the blockchain actually works, I highly recommend reading this paper.
Buffering volatility: A study on the limits of Germany's energy revolution
Buffering volatility is a paper by the German economist Hans-Werner Sinn that examines Germany's transition from fossil fuels and nuclear energy to renewable energy sources for generating electricity.
The paper starts with a summary of the status quo of electricity generation in Germany and the steps undertaken to move to renewable energy sources. Sinn continues by elaborating, on a quite technical level, on the volatility of energy production when using wind and solar. Finally, he discusses several technologies and policy solutions for tackling this problem.
What I liked a lot about this paper was that it tackled the problem from both a technical and an economic perspective. Even though the fact that energy production from renewables involves volatility is kind of obvious, I doubt most people actually understand the scale of the problem or the differences between the technologies available to handle it.
Hopefully research like this continues to be published - and gets read by politicians for once (spoiler: German energy policies are fucking crazy). Sinn has also given several presentations about his publications, which are available on YouTube!
Meltdown: Reading Kernel Memory from User Space
This paper describes the infamous Meltdown attack on CPUs by Intel and other chipmakers. I read this paper not so much out of personal interest, but rather because I had some free time and the bug received incredible media coverage - so I felt a bit of an obligation to read into it.
The bug itself exploits side effects of out-of-order execution on modern processors. It's easily understood if you have a basic understanding of how CPUs work - and I guess even without that, the paper describes it quite well. At university I also took some courses on related topics, so this was an easy read for me (e.g. I wrote a seminar paper about the DRAMA: Exploiting DRAM Addressing for Cross-CPU Attacks publication).
The Case for Learned Index Structures
I think I found this one on Hacker News. I took quite a few courses at university on database management systems and index structures, which I always enjoyed, so this paper looked interesting to me.
The authors propose replacing traditional index structures - data structures for quickly finding the position of a value in a large collection - such as B-Trees and hash tables with learned models, e.g. neural networks. They try out different learned models as replacements for index structures in different use cases. Among the strengths of learned models are their fixed runtime and memory consumption and their ability to capture complex high-dimensional relationships very well.
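The core idea can be shown with a toy example. This is my own sketch, not the authors' code: I use a simple least-squares line instead of the neural networks from the paper, but the principle is the same - a model predicts where a key should sit in a sorted array, and a local search corrects the prediction error.

```python
import bisect

# Keys stored in sorted order; a learned index predicts each key's position.
keys = list(range(0, 1000, 3))  # synthetic, roughly linear key distribution

# Fit a least-squares line: position ~ a * key + b. This stands in for the
# paper's learned models; anything mapping key -> position estimate works.
n = len(keys)
mean_k = sum(keys) / n
mean_p = (n - 1) / 2
a = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys)) / \
    sum((k - mean_k) ** 2 for k in keys)
b = mean_p - a * mean_k

def lookup(key):
    """Predict a position with the model, then correct the error locally."""
    guess = min(max(int(round(a * key + b)), 0), n - 1)
    # Widen a window around the guess until it covers the key. In a real
    # learned index this window is bounded by the model's maximum error.
    lo = hi = guess
    while not (keys[lo] <= key <= keys[hi]):
        lo = max(lo - 1, 0)
        hi = min(hi + 1, n - 1)
    return bisect.bisect_left(keys, key, lo, hi + 1)

assert keys[lookup(300)] == 300
```

On data like this, where the key distribution is nearly linear, the model's prediction is almost exact and the final search only has to inspect a handful of positions - which is exactly the effect the paper exploits.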