
Commit 46b75eb

martinrvisser, rjd15372, madolson, ranshid, and PingXie authored
Valkey 8.1 blogpost (#227)
### Description

Blog on Valkey 8.1 and the new features

### Issues Resolved

N/A

### Check List

- [x] Commits are signed per the DCO using `--signoff`

By submitting this pull request, I confirm that my contribution is made under the terms of the BSD-3-Clause License.

---------

Signed-off-by: Ricardo Dias <[email protected]>
Signed-off-by: martinrvisser <[email protected]>
Signed-off-by: Ricardo Dias <[email protected]>
Signed-off-by: Madelyn Olson <[email protected]>
Co-authored-by: Ricardo Dias <[email protected]>
Co-authored-by: Ricardo Dias <[email protected]>
Co-authored-by: Madelyn Olson <[email protected]>
Co-authored-by: Ran Shidlansik <[email protected]>
Co-authored-by: Ping Xie <[email protected]>
Co-authored-by: Jim Brunner <[email protected]>
1 parent 7af3ff3 commit 46b75eb

File tree

5 files changed: +230 −0 lines changed


content/authors/mvisser.md

+9
@@ -0,0 +1,9 @@
---
title: Martin Visser
extra:
  photo: '/assets/media/authors/mvisser.jpeg'
  github: martinrvisser
---

Martin is Percona’s tech lead for Valkey. A long-term database geek, from analytics and OLTP to in-memory, he is also an open source enthusiast at heart.
In his spare time, Martin is a family man and enjoys learning, tinkering, and building.

content/authors/rdias.md

+9
@@ -0,0 +1,9 @@
---
title: Ricardo J. Dias
extra:
  photo: '/assets/media/authors/rdias.jpeg'
  github: rjd15372
---

Ricardo is a principal software engineer at Percona, where he works as a contributor to the Valkey project. Ricardo has been working on distributed storage systems for many years, but his interests are not limited to distributed systems: he also enjoys designing and implementing lock-free data structures, as well as developing static code analyzers. In his free time, he's a family guy and also manages a roller hockey club.
+212
@@ -0,0 +1,212 @@
+++
# `title` is how your post will be listed and what will appear at the top of the post
title= "Valkey 8.1: Continuing to Deliver Enhanced Performance and Reliability"
# `date` is when your post will be published.
# For the most part, you can leave this as the day you _started_ the post.
# The maintainers will update this value before publishing
# The time is generally irrelevant in how Valkey publishes, so '01:01:01' is a good placeholder
date= 2025-04-02 01:01:01
# 'description' is what is shown as a snippet/summary in various contexts.
# You can make this the first few lines of the post or (better) a hook for readers.
# Aim for 2 short sentences.
description= "Valkey 8.1 is now generally available! Come learn about the exciting improvements in performance, reliability, and observability that are available in this new version."
# 'authors' are the folks who wrote or contributed to the post.
# Each author corresponds to a biography file (more info later in this document)
authors= [ "rdias", "mvisser" ]
+++

The Valkey community is excited to unveil the new release of Valkey 8.1,
a minor version update designed to further enhance performance, reliability, observability and usability
over Valkey 8.0 for all Valkey installations.

In this blog, we'll dive a bit deeper into some of the new features in Valkey 8.1 and how they can benefit your applications.

## Performance

Valkey 8.1 introduces several performance improvements that reduce latency, increase throughput, and lower memory usage.

### The New Hashtable

The main change responsible for several of these performance improvements is the new hashtable implementation, which is used both as the main key-value store in Valkey and as the underlying implementation of the Hash, Set, and Sorted Set data types.

The new hashtable is a complete rewrite of the previous implementation. The new design adopts several modern techniques to reduce the number of allocations needed to store each object, which reduces the number of random memory accesses while also saving memory.

As a result, we observed roughly a 20-byte reduction per key-value pair for keys without a TTL, and up to a 30-byte reduction for key-value pairs with a TTL. The new implementation also improves server throughput by roughly 10% compared to version 8.0 for pipelined workloads when I/O threading is not used.

You can learn more about the design and results in [the dedicated blog post about the implementation](/blog/new-hash-table).

### Iterator Prefetching

Iterating over the key space happens in various scenarios, for example when a Valkey node needs to send all the keys and values to a newly connected replica.

In Valkey 8.1, the iteration functionality has been improved using memory prefetching techniques.

This means that by the time an element is returned to the caller, its bucket and elements have already been loaded into the CPU cache while the previous bucket was being iterated.

This makes the iterator [3.5x](https://github.com/valkey-io/valkey/pull/1568) faster than without prefetching, reducing the time it takes to send the data to a newly connected replica.

Commands like `KEYS` also benefit from this optimization.

### I/O Threads Improvements

Following up on the I/O threading improvements added in 8.0, more operations have been offloaded to the I/O thread pool in the 8.1 release, improving the throughput and latency of some operations.

In the new release, TLS connections can offload the TLS negotiation to the I/O threads. This change improves the rate of accepting new connections by around [300%](https://github.com/valkey-io/valkey/pull/1338).

Other sources of overhead in TLS connection handling were identified, namely the calls to the `SSL_pending()` and `ERR_clear_error()` functions, which were being made in the main event thread. By offloading these calls to the I/O thread pool, throughput improved for some operations: a [10%](https://github.com/valkey-io/valkey/pull/1271) improvement was observed for `SET` operations and a [22%](https://github.com/valkey-io/valkey/pull/1271) improvement for `GET` operations.

Replication traffic efficiency was also improved in 8.1 by offloading the reading of the replication stream on replicas to the I/O thread pool, which means they can serve more read traffic. On primaries, replication stream writes are now offloaded to the I/O thread pool as well.

### Replication Improvements

Full syncs with TLS enabled are up to [18%](https://github.com/valkey-io/valkey/pull/1479) faster, achieved by removing redundant CRC checksumming when using diskless replication.

The fork copy-on-write memory overhead is reduced by up to [47%](https://github.com/valkey-io/valkey/pull/905).

### Sorted Set, HyperLogLog, and Bitcount Optimizations

The `ZRANK` command, which serves a popular use case in operating leaderboards, was optimized to perform up to [45%](https://github.com/valkey-io/valkey/pull/1389) faster, depending on the sorted set size. This optimization requires a C++ compiler and is currently an opt-in feature.
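
As a quick illustration of the leaderboard use case, here is a sketch assuming the valkey-py/redis-py client API; the 8.1 speedup applies transparently, with no application changes needed:

```python
import valkey  # assumption: the valkey-py client; redis-py exposes the same API

client = valkey.Valkey(host="localhost", port=6379)

# A leaderboard is simply a sorted set mapping member -> score.
client.zadd("leaderboard", {"alice": 3120, "bob": 2890, "carol": 3310})

# ZRANK returns the zero-based position ordered by ascending score;
# ZREVRANK gives the "place from the top".
print(client.zrank("leaderboard", "alice"))     # 1
print(client.zrevrank("leaderboard", "carol"))  # 0, i.e. first place
```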

The probabilistic HyperLogLog is another great data type, used for counting unique elements in very large datasets while using only 12 KB of memory regardless of the number of elements. By using the Advanced Vector Extensions (AVX) of modern x86 CPUs, Valkey 8.1 achieves up to a [12x](https://github.com/valkey-io/valkey/pull/1293) speedup for operations like `PFMERGE` and `PFCOUNT` on HyperLogLog data types.

Similarly, the `BITCOUNT` operation has been improved by up to [514%](https://github.com/valkey-io/valkey/pull/1741) using AVX2 on x86.
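
A short sketch of the operations that benefit, again assuming the valkey-py/redis-py client API; the AVX speedups apply transparently on supported x86 CPUs:

```python
import valkey  # assumption: the valkey-py client; redis-py exposes the same API

client = valkey.Valkey(host="localhost", port=6379)

# HyperLogLog: approximate distinct counts in roughly 12 KB per key.
client.pfadd("visitors:monday", "user:1", "user:2", "user:3")
client.pfadd("visitors:tuesday", "user:2", "user:4")
client.pfmerge("visitors:week", "visitors:monday", "visitors:tuesday")
print(client.pfcount("visitors:week"))  # ~4 unique visitors

# BITCOUNT: count the set bits in a string value.
client.setbit("flag:beta", 5, 1)
client.setbit("flag:beta", 17, 1)
print(client.bitcount("flag:beta"))  # 2
```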

### Active Defrag Improvements

Active Defrag has been improved to [eliminate latencies greater than 1ms](https://github.com/valkey-io/valkey/pull/1242). The defrag cycle time has been reduced to 500us (with increased frequency), resulting in much more predictable latencies and a dramatic reduction in tail latencies.

Anti-starvation protection has also been introduced for the presence of long-running commands. If a slow command delays the defrag cycle, the defrag process will run proportionately longer to ensure that the configured CPU utilization is achieved. Given the presence of slow commands, the proportionally longer defrag time has an insignificant impact on latency.

## Observability

There are also several improvements to the observability of system behavior in Valkey 8.1.

### Log Improvements

Valkey 8.1 brings new options for the format of log file entries as well as for the way timestamps are recorded in the log file. This makes the log files easier to consume by log-collection systems.

The format of the log file entries is controlled by the `log-format` parameter, where the default is the existing format:

- `legacy`: the default, traditional log format
- `logfmt`: a structured log format; see https://www.brandur.org/logfmt

The formatting of the timestamps of log file entries is controlled by the `log-timestamp-format` parameter, where the default is the existing format:

- `legacy`: the default format
- `iso8601`: ISO 8601 extended date and time with time zone, of the form `yyyy-mm-ddThh:mm:ss.sss±hh:mm`
- `milliseconds`: milliseconds since the epoch

*Note*: using both the `logfmt` and `iso8601` formats consumes around 60% more space, so disk usage should be considered when enabling these options.
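
As a minimal sketch of switching to the structured format with ISO 8601 timestamps from Python (assuming the valkey-py/redis-py client API and that these parameters can be applied at runtime with `CONFIG SET`; otherwise set them in `valkey.conf`):

```python
import valkey  # assumption: the valkey-py client; redis-py exposes the same API

client = valkey.Valkey(host="localhost", port=6379)

# Emit structured logfmt entries with ISO 8601 timestamps.
client.config_set("log-format", "logfmt")
client.config_set("log-timestamp-format", "iso8601")

# Persist the change so it survives a restart.
client.config_rewrite()
```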

### Extending the Slowlog to Commandlog

Valkey has long had the capability to record slow commands at execution time, based on the threshold set with the `slowlog-log-slower-than` parameter, keeping the last `slowlog-max-len` entries. While a useful troubleshooting tool, it didn't take into account the overall round trip to the application or the impact on network usage. With the addition of the new `COMMANDLOG` feature in Valkey 8.1, the recording of large requests and large replies now gives users greater visibility into end-to-end latency.
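
As an illustrative sketch of inspecting both logs from Python (the `COMMANDLOG GET` arguments shown here are an assumption modeled on `SLOWLOG GET`; check the command reference for the exact syntax):

```python
import valkey  # assumption: the valkey-py client; redis-py exposes the same API

client = valkey.Valkey(host="localhost", port=6379)

# The classic slowlog: the last `slowlog-max-len` commands that ran longer
# than `slowlog-log-slower-than` microseconds.
for entry in client.slowlog_get(10):
    print(entry)

# Assumed syntax: fetch recent command log entries of a given type,
# e.g. commands whose requests or replies were unusually large.
print(client.execute_command("COMMANDLOG", "GET", 10, "LARGE-REQUEST"))
print(client.execute_command("COMMANDLOG", "GET", 10, "LARGE-REPLY"))
```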

### Improved Latency Insights

Valkey has a built-in [latency monitoring](https://valkey.io/topics/latency-monitor/) framework which, when enabled through `latency-monitor-threshold`, samples latency-sensitive code paths such as fork.

Valkey 8.1 adds two additional metrics to the output of the `LATENCY LATEST` command, which reports on the latest latency events that have been collected: the total of the recorded latencies for each event and the number of recorded spikes for that event. These additional fields allow users to better understand how often these latency events occur and the total impact they have on the system.
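
A small sketch of enabling the latency monitor and reading the enriched report, assuming the valkey-py/redis-py client API:

```python
import valkey  # assumption: the valkey-py client; redis-py exposes the same API

client = valkey.Valkey(host="localhost", port=6379)

# Sample any event that takes longer than 100 milliseconds.
client.config_set("latency-monitor-threshold", 100)

# Each reported event includes the event name, the timestamp and duration of
# the latest spike, the all-time maximum, and, new in 8.1, the total of the
# recorded latencies and the number of recorded spikes.
for event in client.execute_command("LATENCY", "LATEST"):
    print(event)
```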

## Extensibility

Valkey is already well known for its extensibility. The sophisticated module system allows the core system to be extended with new features developed as external modules.

### Programmability

In Valkey 8.1 the module API was extended with support for developing new scripting engines as external modules.

This new API opens the door to the development of new language and runtime alternatives to the Lua-based scripts supported by the Valkey core when using the `EVAL` and `FCALL` commands.

In future releases of Valkey, we expect the emergence of new scripting engines. A good candidate is a scripting engine based on WASM, allowing `EVAL` scripts to be written in languages other than Lua and to be executed in a more secure sandbox environment.

There are also benefits for existing Lua scripts, since new Lua runtimes that provide better security properties and/or better performance can be easily plugged in.

Developers who intend to build new scripting engines for Valkey should check the [Module API](https://valkey.io/topics/modules-api-ref/) documentation.

## Additional Highlights

### Conditional Updates

This new functionality allows Valkey users to perform conditional updates with the `SET` command when the given comparison value matches the key’s current value. This is not only a quality-of-life improvement for developers, as they no longer need to implement this condition in their application code; it also saves a round trip to first get a value and compare it before a `SET`. When using the optional `GET` as part of `SET` with `IFEQ`, the existing value is returned regardless of whether it matches the comparison value.
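
A minimal sketch of this compare-and-set pattern from Python (assuming the valkey-py/redis-py client API; the placement of the `IFEQ` and `GET` arguments is an assumption based on the description above, so check the `SET` command reference for the exact syntax):

```python
import valkey  # assumption: the valkey-py client; redis-py exposes the same API

client = valkey.Valkey(host="localhost", port=6379)
client.set("job:42:status", "queued")

# Update the status only if it is still "queued"; the reply is truthy when
# the update was applied and nil when the current value does not match.
print(client.execute_command("SET", "job:42:status", "running", "IFEQ", "queued"))

# With the optional GET flag the existing value is returned either way, so a
# single round trip reveals both the old value and whether the update won.
print(client.execute_command("SET", "job:42:status", "done", "IFEQ", "running", "GET"))
```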

## Conclusion

Valkey 8.1 continues the path of innovation and improvement, transparently bringing more performance and reliability to the user. We look forward to hearing what you achieve with Valkey 8.1! More detail can be found in the [release notes](https://github.com/valkey-io/valkey/releases/tag/8.1.0) for the 8.1 GA release.

## THANK YOU

We appreciate the efforts of all who contributed code to this release!

* Alan Scherger ([flyinprogrammer](https://github.com/flyinprogrammer)),
* Amit Nagler ([naglera](https://github.com/naglera)),
* Basel Naamna ([xbasel](https://github.com/xbasel)),
* Ben Totten ([bentotten](https://github.com/bentotten)),
* Binbin ([enjoy-binbin](https://github.com/enjoy-binbin)),
* Caiyi Wu ([Codebells](https://github.com/Codebells)),
* Danish Mehmood ([danish-mehmood](https://github.com/danish-mehmood)),
* Eran Ifrah ([eifrah-aws](https://github.com/eifrah-aws)),
* Guillaume Koenig ([knggk](https://github.com/knggk)),
* Harkrishn Patro ([hpatro](https://github.com/hpatro)),
* Jacob Murphy ([murphyjacob4](https://github.com/murphyjacob4)),
* Jim Brunner ([JimB123](https://github.com/JimB123)),
* Josef Šimánek ([simi](https://github.com/simi)),
* Jungwoo Song ([bluayer](https://github.com/bluayer)),
* Karthick Ariyaratnam ([karthyuom](https://github.com/karthyuom)),
* Karthik Subbarao ([KarthikSubbarao](https://github.com/KarthikSubbarao)),
* Lipeng Zhu ([lipzhu](https://github.com/lipzhu)),
* Madelyn Olson ([madolson](https://github.com/madolson)),
* Masahiro Ide ([imasahiro](https://github.com/imasahiro)),
* Melroy van den Berg ([melroy89](https://github.com/melroy89)),
* Mikhail Koviazin ([mkmkme](https://github.com/mkmkme)),
* Nadav Gigi ([NadavGigi](https://github.com/NadavGigi)),
* Nadav Levanoni ([nadav-levanoni](https://github.com/nadav-levanoni)),
* Nikhil Manglore ([Nikhil-Manglore](https://github.com/Nikhil-Manglore)),
* Parth Patel ([parthpatel](https://github.com/parthpatel)),
* Pierre ([pieturin](https://github.com/pieturin)),
* Ping Xie ([PingXie](https://github.com/PingXie)),
* Qu Chen ([QuChen88](https://github.com/QuChen88)),
* Rain Valentine ([SoftlyRaining](https://github.com/SoftlyRaining)),
* Ran Shidlansik ([ranshid](https://github.com/ranshid)),
* Ray Cao ([RayaCoo](https://github.com/RayaCoo)),
* Ricardo Dias ([rjd15372](https://github.com/rjd15372)),
* Romain Geissler ([Romain-Geissler-1A](https://github.com/Romain-Geissler-1A)),
* Roman Gershman ([romange](https://github.com/romange)),
* Roshan Khatri ([roshkhatri](https://github.com/roshkhatri)),
* Rueian ([rueian](https://github.com/rueian)),
* Sarthak Aggarwal ([sarthakaggarwal97](https://github.com/sarthakaggarwal97)),
* Seungmin Lee ([sungming2](https://github.com/sungming2)),
* Shai Zarka ([zarkash-aws](https://github.com/zarkash-aws)),
* Shivshankar ([Shivshankar-Reddy](https://github.com/Shivshankar-Reddy)),
* Simon Baatz ([gmbnomis](https://github.com/gmbnomis)),
* Sinkevich Artem ([ArtSin](https://github.com/ArtSin)),
* Stav Ben-Tov ([stav-bentov](https://github.com/stav-bentov)),
* Stefan Mueller ([muelstefamzn](https://github.com/muelstefamzn)),
* Tal Shachar ([talxsha](https://github.com/talxsha)),
* Thalia Archibald ([thaliaarchi](https://github.com/thaliaarchi)),
* Uri Yagelnik ([uriyage](https://github.com/uriyage)),
* Vadym Khoptynets ([poiuj](https://github.com/poiuj)),
* Viktor Szépe ([szepeviktor](https://github.com/szepeviktor)),
* Viktor Söderqvist ([zuiderkwast](https://github.com/zuiderkwast)),
* Vu Diep ([vudiep411](https://github.com/vudiep411)),
* Wen Hui ([hwware](https://github.com/hwware)),
* Xuyang WANG ([Nugine](https://github.com/Nugine)),
* Yanqi Lv ([lyq2333](https://github.com/lyq2333)),
* Yury Fridlyand ([Yury-Fridlyand](https://github.com/Yury-Fridlyand)),
* Zvi Schneider ([zvi-code](https://github.com/zvi-code)),
* bodong.ybd ([yangbodong22011](https://github.com/yangbodong22011)),
* [chx9](https://github.com/chx9),
* [kronwerk](https://github.com/kronwerk),
* otheng ([otheng03](https://github.com/otheng03)),
* [secwall](https://github.com/secwall),
* skyfirelee ([artikell](https://github.com/artikell)),
* xingbowang ([xingbowang](https://github.com/xingbowang)),
* zhaozhao.zz ([soloestoy](https://github.com/soloestoy)),
* zhenwei pi ([pizhenwei](https://github.com/pizhenwei)),
* zixuan zhao ([azuredream](https://github.com/azuredream)),
* 烈香 ([hengyoush](https://github.com/hengyoush)),
* 风去幽墨 ([fengquyoumo](https://github.com/fengquyoumo))
Two binary files (46.7 KB and 31.3 KB) not shown.
