Episodes
-
If you are bored of contemporary topics of AI and need a breather, I invite you to join me to explore a mundane, fundamental and earthy topic.
The CPU.
A reading of my substack article https://hnasr.substack.com/p/the-beauty-of-the-cpu
-
This new PostgreSQL 17 feature is a game changer: it can now combine I/Os when performing a sequential scan.
Grab my database course
https://courses.husseinnasser.com
-
No technical video today, just talking about the idea of discipline and consistency.
-
Fundamentals of Operating Systems Course
This video is an overview of how the operating system kernel does socket management and the different data structures it utilizes to achieve that.
timestamps
0:00 Intro
1:38 Socket vs Connections
7:50 SYN and Accept Queue
18:56 Socket Sharding
23:14 Receive and Send buffers
27:00 Summary
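To make the accept queue and the receive/send buffers concrete, here is a minimal sketch of my own (not code from the video): the `listen()` backlog is the depth of the accept queue, and the kernel allocates per-socket receive and send buffers we can inspect with `getsockopt`.

```python
# Minimal sketch: two kernel socket knobs the video covers.
# listen(backlog) sets the accept-queue depth; SO_RCVBUF / SO_SNDBUF
# are the kernel-allocated receive and send buffers for the socket.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
srv.listen(128)              # backlog: max connections waiting to be accept()ed

rcvbuf = srv.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sndbuf = srv.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("listening on", srv.getsockname(), "rcvbuf:", rcvbuf, "sndbuf:", sndbuf)
srv.close()
```

The exact buffer sizes are kernel defaults and vary per system; the point is that they exist per socket, which is part of the memory cost of each connection.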
-
Polling is the ability to interrogate a backend to see if a piece of information is ready. It can make for a chatty system, and as a result long polling was born. In this video I explain the beauty of this design pattern and how we can push it to its limit.
0:00 Intro
0:45 Polling
2:30 Problem with Polling
3:50 Long Polling
8:18 Timeouts
10:00 Long Polling Benefits
12:00 Make requests into Long Polling
17:36 Request Resumption
21:40 Summary
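The core of long polling can be sketched in a few lines. This is my own illustration, not the video's code: instead of the client asking repeatedly, the server holds the request open until data arrives or a timeout fires, at which point the client simply re-issues the poll.

```python
# Minimal long-polling sketch (illustrative): the server blocks on a queue
# with a timeout instead of making the client busy-poll.
import queue
import threading

events = queue.Queue()

def long_poll(timeout_seconds):
    """Hold the 'request' open until an event arrives or the timeout expires."""
    try:
        return events.get(timeout=timeout_seconds)  # wait, don't busy-poll
    except queue.Empty:
        return None  # timed out: the client re-issues the long poll

# Simulate the backend producing a result 0.2s later, while the poll waits.
threading.Timer(0.2, lambda: events.put("job-42-done")).start()

result = long_poll(timeout_seconds=5)
print(result)  # one held request replaces many chatty polls
```

The timeout branch is what makes the pattern robust behind proxies: the client gets an empty response periodically and resumes the request, which is the "request resumption" idea in the outline above.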
-
You get better as a software engineer when you go through these stages.
0:00 Intro
1:15 Understand a technology
7:07 Articulate how it works
15:30 Understand its limitations
19:48 Try to build something better
27:45 Realize what you built also has limitations
32:48 Appreciate the original tech as is
Understand a technology
We use technologies all the time without knowing how they work. And it is OK not to know how things work if the interest isn't there. But when there is interest in understanding how something works, pursue it. It feels good when you understand how something works, because you work better with it; you swim with the tide instead of against it.
When I learned how TCP/IP works, I came to appreciate every connection request and how requests are read. You will ask questions:
What is my code doing here?
When exactly am I creating connections?
When am I reading from the connection?
Is it safe to share connections?
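Those questions have literal answers in code. Here is a tiny loopback sketch of my own (hypothetical, not from the video) that marks the exact lines where the connection is created and where bytes are read from it:

```python
# Loopback sketch: annotate where a TCP connection is created and read.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()

def server():
    conn, _ = srv.accept()   # server side picks the connection off the accept queue
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=server)
t.start()

client = socket.create_connection((host, port))  # <-- connection created HERE (3-way handshake)
data = b""
while True:
    chunk = client.recv(1024)                    # <-- reading from the connection HERE
    if not chunk:
        break
    data += chunk
client.close()
t.join()
srv.close()
print(data)
```

Once you see these two points explicitly, the sharing question answers itself: two threads calling `recv` on the same socket would interleave reads, which is why connection sharing needs care.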
Articulate how it works
This one is not easy; you might think you understand something until you try to explain how it works. If you find yourself using jargon, you probably don't understand it and are just trying to impress others. Have you seen people who want to talk about something to show they understand it? It's the opposite. Try to truly articulate how it works and you will really understand it; then you are back to stage 1.
I thought I understood how a backend reads requests until I tried to speak about it.
Understand the technology's limitations
Once stages 1 and 2 are done you will truly understand the tech. Now you are confident and excited about it, and you will see when you can use the tech to its full potential, and also know the weak points where it breaks. This happens a lot with TCP/IP; we know TCP's limitations.
Try to build something better
This one is optional and can be skipped, but attempting to design or build something better than the tech, because you know its limitations, will truly reveal how much better you have become. The challenge here is ego: you might understand the limitations, but your problem is thinking that what you build will be flawless. Proceed with this step with caution.
Realize what you built also has limitations
The dust settles. This step hurts, and it may take you a while to get there, but whatever you build will have flaws, and when you realize this is when you get better as an engineer.
Appreciate the tech as is
This is when you come full circle, back to the first stage: look at the technology and understand it, but don't judge it. Just know its limitations and its strengths and flow with it. Stop fighting and instead build around the tech. Does that mean you shouldn't build anything new? Of course not. Go build, but don't stress about making something better to defeat existing tech; build it for the sake of building it.
-
Fundamentals of Operating Systems Course
https://oscourse.win
Very clever! We often call the read/recv system call to read requests from a connection; this copies data from the kernel receive buffer to user space, which has a cost. This new patch allows zero copy with a notification: "Reading data out of a socket instead becomes a 'notification' mechanism, where the kernel tells userspace where the data is." This kernel patch enables zero copy from the receive queue.
https://lore.kernel.org/io-uring/ZwW7_cRr_UpbEC-X@LQ3V64L9R2/T/
0:00 Intro
1:30 Patch summary
7:00 Normal Connection Read (Kernel Copy)
12:40 Zero copy Read
15:30 Performance
-
Cloudflare built a global cache purge system that runs under 150 ms.
This is how they did it.
Using RocksDB to maintain a local CDN cache, a peer-to-peer data-center distributed system, and clever engineering, they went from a 1.5 second purge down to 150 ms.
However, this isn't the full picture, because that 150 ms is actually just the P50. In this video I explore Cloudflare's CDN work and how the old core-based, centralized Quicksilver lazy purge compares to the new coreless, decentralized active purge. I explore the pros and cons of both systems and give you my thoughts.
0:00 Intro
4:25 From Core Base Lazy Purge to Coreless Active
12:50 CDN Basics
16:00 TTL Freshness
17:50 Purge
20:00 Core-Based Purge
24:00 Flexible Purges
26:36 Lazy Purge
30:00 Old Purge System Limitations
36:00 Coreless / Active Purge
39:00 LSM vs BTree
45:30 LSM Performance issues
48:00 How Active Purge Works
50:30 My thoughts about the new system
58:30 Summary
Cloudflare blog
https://blog.cloudflare.com/instant-purge/
Mentioned Videos
Percentile Tail Latency Explained (95%, 99%) Monitor Backend performance with this metric
https://www.youtube.com/watch?v=3JdQOExKtUY
How Discord Stores Trillions of Messages | Deep Dive
https://www.youtube.com/watch?v=xynXjChKkJc
Fundamentals of Operating Systems Course
https://os.husseinnasser.com
Backend Troubleshooting Course
https://performance.husseinnasser.com
-
Fundamentals of Database Engineering udemy course
https://databases.win
MySQL has had a bumpy journey since the release of version 8.0 in 2018: critical crashes that made it into the final product, significant performance regressions, and tons of stability issues and bugs. In this video I explore what happened to MySQL, whether these issues are getting fixed, and the current state of MySQL at the end of 2024.
0:00 Intro
2:00 MySQL 8.0 vs 5.7 Performance
11:00 Critical Crash in 8.0.38, 8.4.1 and 9.0.0
15:40 Is 8.4 better than 8.0.36?
16:30 More Features = More Bugs
22:30 Summary and my thoughts
Resources
https://x.com/MarkCallaghanDB/status/1786428909376164263
https://www.percona.com/blog/do-not-upgrade-to-any-version-of-mysql-after-8-0-37/
http://smalldatum.blogspot.com/2024/09/mysql-innodb-vs-sysbench-on-large-server.html
https://www.percona.com/blog/mysql-8-0-vs-5-7-are-the-newer-versions-more-problematic/
-
Fundamentals of Operating Systems Course
https://oscourse.win
In this video I use strace, a tool that measures how many system calls a process makes. We compare the simple task of reading from a file across different runtimes, namely NodeJS, Bun, Python, and native C, and discuss the cost of kernel mode switches and system calls.
0:00 Intro
5:00 Code Explanation
6:30 Python
9:30 NodeJS
12:30 BunJS
13:12 C
16:00 Summary
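Here is a sketch of the kind of program being traced; this is my own assumed version, not the video's code. Using raw `os.read` makes each system call explicit, so running it under `strace -c python3 script.py` shows the `openat`/`read`/`close` counts directly.

```python
# Sketch: read a file with raw os calls so the syscall sequence is predictable.
# Run under `strace -c` to see the kernel-side counts match.
import os
import tempfile

# Create a small 10,000-byte test file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"x" * 10000)
tmp.close()

fd = os.open(tmp.name, os.O_RDONLY)  # one openat(2) system call
chunks = []
syscall_reads = 0
while True:
    chunk = os.read(fd, 4096)        # each os.read is one read(2) system call
    syscall_reads += 1               # counts the final empty read too: it is a real syscall
    if not chunk:
        break
    chunks.append(chunk)
os.close(fd)                         # one close(2) system call
os.remove(tmp.name)
print("read calls:", syscall_reads, "bytes:", sum(len(c) for c in chunks))
```

10,000 bytes at a 4,096-byte buffer costs four `read` calls (4096 + 4096 + 1808 + the empty read signaling EOF); a higher-level runtime that buffers differently, or adds its own startup syscalls, is exactly what strace exposes in the comparison.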
-
Fundamentals of Operating Systems Course
https://os.husseinnasser.com
When do you use threads? I would say in scenarios where the task is either:
1) an IO blocking task
2) CPU heavy
3) a large volume of small tasks
In any of the cases above, it is favorable to offload the task to a thread.
1) IO blocking task
When you read from or write to disk, depending on how you do it and the kernel interface you used, the write might be blocking. This means the process that executes the IO will not be allowed to execute any more code until the write/read completes. That is why you see most logging operations done on a secondary thread (like libuv, which Node uses); this way that thread is blocked but the main process/thread can resume its work. If you can do file reads/writes asynchronously, with say io_uring, then you technically don't need threading. Notice how I said file IO, because it is different from socket IO, which is always done asynchronously with epoll/select etc.
2) CPU heavy
The second use case is when the task requires lots of CPU time, which starves/blocks the rest of the process from doing its normal job. Offloading that task to a thread so that it runs on a different core allows the main process to continue running on its original core.
3) Large volume of small tasks
The third use case is when you have a large number of small tasks and a single process can't deliver enough throughput. An example would be accepting connections: a single process can only accept connections so fast; to increase throughput when you have a massive number of clients connecting, you would spin up multiple threads to accept those connections, and of course read and process requests. Perhaps you would also enable port reuse so that you avoid accept mutex locking.
Keep in mind that threads come with their own challenges and problems, so avoid them when they are not required.
0:00 Intro
1:40 What are threads?
7:10 IO blocking Tasks
17:30 CPU Intensive Tasks
22:00 Large volume of small tasks
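Case 1 above (the IO blocking task) can be sketched in a few lines. This is my illustration, not the course's code: a blocking file write is handed to a worker thread, the way logging libraries (and Node's libuv pool) do, so the main thread keeps doing its normal job.

```python
# Sketch of offloading a blocking write to a secondary "logging" thread.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def blocking_log_write(path, lines):
    # This write path can block; done on the main thread it would stall us.
    with open(path, "a") as f:
        for line in lines:
            f.write(line + "\n")
    return len(lines)

log_path = tempfile.NamedTemporaryFile(delete=False).name
pool = ThreadPoolExecutor(max_workers=1)  # the secondary thread

future = pool.submit(blocking_log_write, log_path, ["req 1", "req 2", "req 3"])

# Main thread continues its normal job while the write happens elsewhere.
main_work = sum(i * i for i in range(1000))

written = future.result()  # join only when we actually need the outcome
pool.shutdown()
os.remove(log_path)
print("lines written:", written, "main work:", main_work)
```

In CPython the GIL means case 2 (CPU heavy) needs processes rather than threads, but blocking IO releases the GIL, so this pattern holds for case 1 exactly as described.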
-
I am fascinated by how timeouts affect backend and frontend programming.
When a party is waiting on something, you can place a timeout to break the wait. This is useful for freeing resources for more critical processes, detecting slow operations, and even avoiding DoS attacks.
Contrary to common belief, timeouts are not exclusive to request processing; they can be applied to other parts of frontend-backend communication. Let us explore this briefly.
0:00 Intro
2:30 Connection Timeout
5:00 Request Read timeout
10:00 Wait Timeout
12:00 Usage Timeout
14:00 Response Timeout
16:00 Canceling a request
19:50 Proxies and timeouts
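Two of the timeouts above can be shown with plain sockets. This is a minimal sketch of mine, not from the video; it assumes 192.0.2.1 (a reserved TEST-NET address that never answers) behaves as documented on your network.

```python
# Sketch: a connection timeout and a response/read timeout on the client side.
import socket

# Connection timeout: give up if the TCP handshake doesn't finish in time.
connect_timed_out = False
try:
    socket.create_connection(("192.0.2.1", 80), timeout=0.5)
except (socket.timeout, OSError):  # timeout, or the network rejects the route
    connect_timed_out = True

# Response/read timeout: the peer's kernel completes the handshake (SYN queue)
# but the application never sends anything back, so the read must be bounded.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
client = socket.create_connection(srv.getsockname())
client.settimeout(0.5)   # response timeout on this socket
read_timed_out = False
try:
    client.recv(1024)    # server stays silent; this wait is bounded
except socket.timeout:
    read_timed_out = True
client.close()
srv.close()
print(connect_timed_out, read_timed_out)
```

Note the second case connects successfully even though the server never calls `accept`: the kernel finishes the handshake from the listen backlog, which is why a connect timeout alone is never enough.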
-
Learn more about database and OS internals, check out my courses
Fundamentals of database engineering https://databases.win
Fundamentals of operating systems https://oscourse.win
This new PostgreSQL 17 feature is a game changer.
You see, Postgres, like most databases, works with fixed-size pages. Pretty much everything is in this format: indexes, table data, etc. Those pages are 8K in size; each page holds the rows or index tuples plus a fixed header. The pages are just bytes in files, and they are read and cached in the buffer pool.
To read page 0, for example, you would call read at offset 0 for 8192 bytes; to read page 1, that is another read system call at offset 8192 for 8192 bytes; page 7 is offset 57,344 for 8192 bytes, and so on.
If a table is 100 pages stored in a file, a full table scan would make 100 system calls, and each system call has an overhead (I talk about all of that in my OS course).
The enhancement in Postgres 17 is to combine I/Os, and you can specify how much IO to combine. So while you could technically scan that entire table in one system call, that doesn't mean it's always a good idea, and I'll talk about that.
This also seems to include vectored I/O via the preadv system call, which fills an array of buffers in a single call starting from a given offset.
The challenge becomes how not to read too much. Say I'm doing a seq scan to find something: I read page 0, find it, and quit; I don't need to read any more pages. With this feature I might read 10 pages in one I/O and pull all their content into shared buffers, only to find my result in the first page (essentially wasting disk bandwidth, memory, etc.).
It is going to be interesting to balance this out.
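The offset arithmetic and the combining trade-off above can be demonstrated outside Postgres. This is my own toy demo, not Postgres source: a fake 100-page "table file" read one 8K page at a time with `pread`, versus one combined read pulling 10 pages in a single system call.

```python
# Toy demo of page-offset math and combined reads (not Postgres code).
import os
import tempfile

PAGE_SIZE = 8192
NUM_PAGES = 100

# Build a fake 100-page table file; page i is filled with byte value i.
tmp = tempfile.NamedTemporaryFile(delete=False)
for i in range(NUM_PAGES):
    tmp.write(bytes([i]) * PAGE_SIZE)
tmp.close()

fd = os.open(tmp.name, os.O_RDONLY)

def read_page(page_no):
    # One system call per page: pread at offset page_no * PAGE_SIZE.
    return os.pread(fd, PAGE_SIZE, page_no * PAGE_SIZE)

page7 = read_page(7)  # offset 57,344 for 8,192 bytes

# Combined IO: one system call pulls 10 pages (80K) at once, at the risk
# of wasted bandwidth and buffer-pool memory if the answer was in page 0.
combined = os.pread(fd, 10 * PAGE_SIZE, 0)

os.close(fd)
os.remove(tmp.name)
print(len(page7), len(combined))
```

A full scan this way is 10 system calls instead of 100, which is exactly the overhead the feature removes; the waste scenario is visible too, since `combined` drags in pages 1 through 9 even when page 0 had the row.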
-
Fundamentals of Operating Systems Course
https://os.husseinnasser.com
Why the Windows kernel connects slower than Linux. I explore the behavior of the TCP/IP stack in the Windows kernel when it receives a RST from the backend server, especially when the host is available but the port we are trying to connect to is not. This behavior is exacerbated by having both IPv6 and IPv4, and by the Happy Eyeballs protocol, where IPv6 is favored.
0:00 Intro
0:30 Fundamentals TCP/IP
3:00 Unreachable Port Behavior
6:00 Client Kernel Behavior (Linux vs Windows)
11:40 Slow TCP Connect on Windows
15:00 localhost, IPv6 and IPv4
20:00 Happy Eyeballs
28:00 Registry keys to change the behavior
31:00 Port Unreachable vs Host Unreachable
https://daniel.haxx.se/blog/2024/08/14/slow-tcp-connect-on-windows/
-
In this episode of the backend engineering show I describe an interesting bug I ran into where the web server ran out of ephemeral ports causing the system to halt.
0:00 Intro
0:30 System architecture
2:20 The behavior of the bug
4:00 Backend Troubleshooting
7:00 The cause
15:30 Ephemeral ports on loopback
-
Fundamentals of Operating Systems Course
https://os.husseinnasser.com
Linux I/O expert and subsystem maintainer Jens Axboe has submitted all of the io_uring feature updates ahead of the imminent Linux 6.10 merge window. In this video I explore this with a focus on zero copy.
0:00 Intro
0:30 io_uring gets faster
2:00 What is io_uring
7:00 How Normal Copying Works
12:00 How Zero Copy Works
13:50 Zero Copy and TLS
https://www.phoronix.com/news/Linux-6.10-IO_uring
https://lore.kernel.org/io-uring/[email protected]/?s=09
-
Fundamentals of Operating Systems Course
https://oscourse.win
Looks like Fedora is compiling CPython with the -O3 flag, which does aggressive function inlining among other optimizations. This seems to improve Python benchmark performance by at most 1.16x, at the cost of an extra 3MB in binary size (text segment). It does seem to slow down some benchmarks as well, though not significantly.
-O1 - local register allocation, subexpression elimination
-O2 - function inlining of small functions only
-O3 - aggressive inlining, SIMD
0:00 Intro
1:00 Fedora Linux gets Fast Python
5:40 What is Compiling?
9:00 Compiling with No Optimization
12:10 Compiling with -O1
15:30 Compiling with -O2
20:00 Compiling with -O3
23:20 Showing Numbers
Backend Troubleshooting Course
https://performance.husseinnasser.com
-
https://oscourse.win
Allegro improved their Kafka produce tail latency by over 80% when they switched from ext4 to XFS. What I enjoyed most about this article is the detailed analysis and tweaking the team did to ext4 before considering the switch to XFS. This is a classic example of what a good tech blog looks like, in my opinion.
0:00 Intro
0:30 Summary
2:35 How Kafka Works?
5:00 Producers Writes are Slow
7:10 Tracing Kafka Protocol
12:00 Tracing Kernel System Calls
16:00 Journaled File Systems
21:00 Improving ext4
26:00 Switching to XFS
Blog
https://blog.allegro.tech/2024/03/kafka-performance-analysis.html
-
Get my backend course https://backend.win
Google submitted a patch to Linux kernel 6.8 that improves TCP performance by 40%. This is done by rearranging the TCP structures for better CPU cache line utilization; I explore this here.
0:00 Intro
0:30 Google improves Linux Kernel TCP by 40%
1:40 How CPU Cache Line Works
6:45 Reviewing the Google Patch
https://www.phoronix.com/news/Linux-6.8-Networking
https://lore.kernel.org/netdev/[email protected]/
Discovering Backend Bottlenecks: Unlocking Peak Performance
https://performance.husseinnasser.com
-
0:00 Intro
2:00 File System Block vs Database Pages
4:00 Torn pages or partial page
7:40 How Oracle Solves torn pages
8:40 MySQL InnoDB Doublewrite buffer
10:45 Postgres Full page writes
-