This content originally appeared on Level Up Coding - Medium and was authored by Adam Szpilewicz
Implement your own thread-safe cache in Go without external dependencies
Caching is a critical aspect of optimizing the performance of modern applications. It allows you to store and quickly retrieve the results of expensive operations or frequently accessed data, reducing the need to recompute or fetch the data repeatedly. In this article, we will explore how to implement a thread-safe cache in Go using the sync.Map package. This cache implementation supports expiration of cache entries, ensuring that stale data does not linger in the cache.
Why bother anyway
Before we start building our own thread-safe in-memory cache, let's consider the pros and cons. Since the alternative is to use external libraries (tools) that were invented for caching and have a long history of usage and support, let's weigh the advantages and disadvantages.
Implementing your own thread-safe cache using Go’s sync.Map can have several benefits over using external libraries like Redis, depending on your use case and requirements. Here are some reasons why creating your own cache with sync.Map might be advantageous:
- Lower latency: When using an in-memory cache like the one implemented with sync.Map, the data is stored within your application's memory. This can result in lower access latency compared to a separate service like Redis, which requires network communication between your application and the caching service.
- Simpler deployment: With a sync.Map-based cache, there's no need to deploy, configure, and maintain an additional service like Redis. Your caching solution is part of your application, making the deployment process simpler and potentially reducing operational complexity.
- Reduced resource usage: An in-memory cache with sync.Map typically consumes fewer resources than an external service like Redis, which can save on memory and CPU usage. This might be more cost-effective, especially for smaller-scale applications or those with tight resource constraints.
- Easier integration: Implementing a cache using sync.Map directly in your Go application can make it easier to integrate with your existing codebase. You don't need to learn a new API or manage connections to an external service.
- Customization: When creating your own cache implementation, you have full control over its behavior and features. You can easily tailor the cache to your specific needs, optimize it for your use case, and add custom expiration policies or other functionality as needed.
- Fun: Writing your own cache implementation is simply fun, and it helps you better understand the external libraries that provide caching functionality. Understanding them better, in turn, helps you make full use of all the features they provide.
However, it’s important to note that using an external caching solution like Redis has its own set of advantages, particularly for larger-scale applications or those with more complex caching requirements. Some benefits of using Redis include:
- Scalability: Redis is designed for high-performance and can scale horizontally to handle a large number of requests and data sizes.
- Persistence: Redis supports different levels of data persistence, ensuring that your cache data survives restarts or crashes.
- Advanced features: Redis offers a wide range of features beyond simple key-value caching, such as data structures, pub/sub messaging, and more.
Ultimately, the choice between implementing your own cache with sync.Map or using an external library like Redis will depend on your specific needs, the scale of your application, and the trade-offs you're willing to make in terms of performance, complexity, and resources.
Moreover, implementing your own cache is fun and helps you better understand more sophisticated products like Redis. Therefore, we will implement one in this post.
Why we use sync.Map
Simply, because it fits our needs perfectly. The deeper explanation: sync.Map is a concurrent, thread-safe map implementation in the Go standard library. It is designed for cases where the map is accessed by multiple goroutines concurrently, and the number of keys is unknown or changes over time.
It’s important to note that while sync.Map is a great choice for specific use-cases, it is not meant to replace the built-in map type for all scenarios. In particular, sync.Map is best suited for cases where:
- The map is primarily read-heavy, with occasional writes.
- The number of keys changes over time or is not known in advance.
- The map is accessed concurrently by multiple goroutines.
In cases where the number of keys is fixed or known in advance, and the map can be pre-allocated, the built-in map type with appropriate synchronization using sync.Mutex or sync.RWMutex might provide better performance.
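To make that trade-off concrete, here is a minimal sketch of the mutex-based alternative mentioned above (the RWMutexCache name and NewRWMutexCache constructor are just illustrative, not part of the cache we build below). Reads take the cheaper read lock, so many readers can proceed in parallel:

```go
package main

import "sync"

// RWMutexCache guards a built-in map with a sync.RWMutex.
type RWMutexCache struct {
	mu   sync.RWMutex
	data map[string]interface{}
}

// NewRWMutexCache pre-allocates the underlying map.
func NewRWMutexCache() *RWMutexCache {
	return &RWMutexCache{data: make(map[string]interface{})}
}

// Get takes the read lock, so concurrent readers do not block each other.
func (c *RWMutexCache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

// Set takes the write lock, excluding all readers and writers.
func (c *RWMutexCache) Set(key string, value interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
}
```
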
Creating a SafeCache
Our SafeCache, as mentioned above, is a simple, thread-safe cache that uses Go's sync.Map to store its key-value pairs.
Firstly, we define a CacheEntry struct to hold the value and its expiration timestamp:
type CacheEntry struct {
    value      interface{}
    expiration int64
}
The SafeCache struct embeds a sync.Map, which provides concurrency-safe access to the key-value pairs:
type SafeCache struct {
    syncMap sync.Map
}
Adding Values to the Cache
We then define a Set method that allows us to store a value in the cache with a specified Time To Live (TTL). The TTL determines how long the cache entry should be considered valid. Once the TTL expires, the entry is removed during the next clean-up cycle:
func (sc *SafeCache) Set(key string, value interface{}, ttl time.Duration) {
    expiration := time.Now().Add(ttl).UnixNano()
    sc.syncMap.Store(key, CacheEntry{value: value, expiration: expiration})
}
Retrieving Values from the Cache
The next method we need is Get, which retrieves a value from the cache by its key. If the value is not found or has expired, the method returns false:
func (sc *SafeCache) Get(key string) (interface{}, bool) {
    // ... (full implementation in the complete listing below)
}
What matters in the Get method is the type assertion after loading the value from the cache. sync.Map's Load method returns an interface{}, so we must assert it back to a CacheEntry:
entry, found := sc.syncMap.Load(key)
if !found {
    return nil, false
}

// Type assertion to CacheEntry, as entry is an interface{}
cacheEntry := entry.(CacheEntry)
Removing Values from the Cache
And of course we need a Delete method that allows us to remove a value from the cache:
func (sc *SafeCache) Delete(key string) {
    sc.syncMap.Delete(key)
}
Cleaning Up Expired Entries
We extend the cache with a CleanUp method that is responsible for periodically removing expired entries. It uses the Range method provided by sync.Map to iterate over all key-value pairs and remove those whose TTL has expired:
func (sc *SafeCache) CleanUp() {
    // ... (full implementation in the complete listing below)
}
To run the CleanUp method, we can start a separate Goroutine when initializing the cache:
cache := &SafeCache{}
go cache.CleanUp()
And here is the whole code snippet:
package cache

import (
    "sync"
    "time"
)

// CacheEntry is a value stored in the cache.
type CacheEntry struct {
    value      interface{}
    expiration int64
}

// SafeCache is a thread-safe cache.
type SafeCache struct {
    syncMap sync.Map
}

// Set stores a value in the cache with a given TTL (time to live).
func (sc *SafeCache) Set(key string, value interface{}, ttl time.Duration) {
    expiration := time.Now().Add(ttl).UnixNano()
    sc.syncMap.Store(key, CacheEntry{value: value, expiration: expiration})
}

// Get retrieves a value from the cache. If the value is not found
// or has expired, it returns false.
func (sc *SafeCache) Get(key string) (interface{}, bool) {
    entry, found := sc.syncMap.Load(key)
    if !found {
        return nil, false
    }

    // Type assertion to CacheEntry, as entry is an interface{}
    cacheEntry := entry.(CacheEntry)
    if time.Now().UnixNano() > cacheEntry.expiration {
        sc.syncMap.Delete(key)
        return nil, false
    }
    return cacheEntry.value, true
}

// Delete removes a value from the cache.
func (sc *SafeCache) Delete(key string) {
    sc.syncMap.Delete(key)
}

// CleanUp periodically removes expired entries from the cache.
func (sc *SafeCache) CleanUp() {
    for {
        time.Sleep(1 * time.Minute)
        sc.syncMap.Range(func(key, entry interface{}) bool {
            cacheEntry := entry.(CacheEntry)
            if time.Now().UnixNano() > cacheEntry.expiration {
                sc.syncMap.Delete(key)
            }
            return true
        })
    }
}
Finally, you can run the main.go program below to check that the cache works. We create an HTTP server that listens for requests at the “/compute” endpoint. The server accepts an integer n as a query parameter and returns the result of an expensive computation (in this case, a simple square operation with a simulated delay). The server first checks the cache to see if the result for the given input is already cached; if not, it calculates the result, stores it in the cache, and returns it to the client.
To test the server, run the code and make a request to http://localhost:8080/compute?n=5. The first request will take longer (due to the simulated delay), but subsequent requests with the same n will return the cached result instantly.
package main

import (
    "fmt"
    "log"
    "net/http"
    "safe-cache/cache"
    "strconv"
    "time"
)

func expensiveComputation(n int) int {
    // Simulate an expensive computation
    time.Sleep(2 * time.Second)
    return n * n
}

func main() {
    safeCache := &cache.SafeCache{}

    // Start a goroutine to periodically clean up the cache
    go safeCache.CleanUp()

    http.HandleFunc("/compute", func(w http.ResponseWriter, r *http.Request) {
        query := r.URL.Query()
        n, err := strconv.Atoi(query.Get("n"))
        if err != nil {
            http.Error(w, "Invalid input", http.StatusBadRequest)
            return
        }

        cacheKey := fmt.Sprintf("result_%d", n)
        cachedResult, found := safeCache.Get(cacheKey)

        var result int
        if found {
            result = cachedResult.(int)
        } else {
            result = expensiveComputation(n)
            safeCache.Set(cacheKey, result, 1*time.Minute)
        }

        _, err = fmt.Fprintf(w, "Result: %d\n", result)
        if err != nil {
            return
        }
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}
Conclusion
In this article, we have demonstrated how to implement a simple, thread-safe cache in Go using the sync.Map package. This cache implementation supports key-value storage with TTL-based expiration, and can be easily integrated into your Go applications to improve performance and reduce the load on your data sources or computation resources.