Redis Basics for Practical Use - Quick Interview Notes (Crash Course)
1. Introduction to Redis
Redis is a remote in-memory database with the following key characteristics:
- Handles millions of requests per second
- A non-relational database that stores data in key-value pairs
- Stores all data in memory
- Can form Redis clusters to achieve three core benefits: high performance, high availability, and high concurrency
- Single-threaded: A single Redis node processes only one user command at a time. (Note: This doesn’t mean Redis only uses one thread overall; background threads handle tasks like checking for expired data.)
Redis provides five primary data structures:
- String: Can store strings, integers, or floating-point numbers.
- List: A linked list.
- Set: An unordered collection where values are unique.
- Hash: A hash table, similar to Java's `HashMap`.
- ZSet (Sorted Set): An ordered set where elements are sorted by a `score`; smaller scores appear first.
In addition to its core data structures, Redis offers the following features:
- Key Expiration: Keys can have an expiration time set. Once expired, they are automatically deleted.
- Publish/Subscribe: Can act as a simple message queue.
- Lua Scripting: Allows you to create custom Redis commands for complex functionalities. Lua scripts execute atomically.
- Basic Transactions: Ensures a batch of commands is executed as a single atomic operation.
- Pipeline: Enables sending multiple commands at once to reduce network overhead. However, commands in a pipeline are not atomic: if one fails, the others still execute independently. (A short client-side sketch follows this list.)
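To make the pipeline and transaction behavior concrete, here is a minimal sketch using the Jedis client (covered in section 3.1). The key names and the local Redis instance at 127.0.0.1:6379 are assumptions for illustration only:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Transaction;

public class PipelineVsTransactionDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Pipeline: batches commands into one round trip; NOT atomic.
            Pipeline pipeline = jedis.pipelined();
            pipeline.set("page:views", "0");
            pipeline.incr("page:views");
            pipeline.expire("page:views", 3600); // key expiration, in seconds
            pipeline.sync();                     // send all queued commands at once

            // Basic transaction (MULTI/EXEC): commands are queued and executed as one unit.
            Transaction tx = jedis.multi();
            tx.incr("page:views");
            tx.incr("site:views");
            tx.exec();
        }
    }
}
```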
Common Use Cases for Redis:
- Tracking Likes for Videos or Posts: Use a Redis `Set` to store the user IDs of likers. Adding a like inserts a user ID into the set, and removing a like deletes the user ID. For inactive content, remove the data from Redis and persist it to a database. Memory usage: suppose user IDs are 10 characters long, with an average of 5,000 likes per video and 100,000 active videos. The estimated memory usage is `10 bytes * 5,000 likes * 100,000 videos / 1024 / 1024 / 1024 ≈ 4.66 GB`, which is well within Redis's capacity. (A short sketch of this pattern follows this list.)
- Storing User Sessions: After a user logs in, save the generated token in Redis and set an expiration time. When a user makes a request, validate the token. If valid, process the request; if expired, prompt the user to log in again.
- Data Caching: For data that is frequently read but rarely updated, cache it in Redis to: ① improve response times for users; ② reduce the load on the database. Be aware, however, that caching may introduce data consistency issues.
- Distributed Locks: Use the `SET <key> <value> NX EX <seconds>` command (or `SETNX` plus `EXPIRE`) to attempt setting a key with a timeout. If successful, the lock is acquired; otherwise, the lock is already held, and the system waits or retries. Always set a timeout to prevent deadlocks. Release the lock by deleting the key once the task is complete.
- Counters: Use the `INCR <key>` command to implement counters, such as tracking the number of clicks or visits.
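The like-tracking pattern above maps directly onto a few Set commands. Here is a minimal sketch using the Jedis client from section 3.1; the key naming scheme (`video:likes:<videoId>`) and connection details are assumptions:

```java
import redis.clients.jedis.Jedis;

public class VideoLikeService {

    private final Jedis jedis = new Jedis("127.0.0.1", 6379);

    // Key layout (assumption for this sketch): one Set per video, e.g. "video:likes:42"
    private String key(String videoId) {
        return "video:likes:" + videoId;
    }

    public void like(String videoId, String userId) {
        jedis.sadd(key(videoId), userId);     // duplicate likes are ignored by the Set
    }

    public void unlike(String videoId, String userId) {
        jedis.srem(key(videoId), userId);
    }

    public long likeCount(String videoId) {
        return jedis.scard(key(videoId));     // O(1) cardinality lookup
    }

    public boolean hasLiked(String videoId, String userId) {
        return jedis.sismember(key(videoId), userId);
    }
}
```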
2. Common Redis Commands
Global Commands:
```python
keys *        # List all keys. This performs a full scan of the database, which is
              # not recommended in production environments. Use `scan` instead.
dbsize        # Get the total number of keys. This has O(1) time complexity and is safe to use.
exists <key>  # Check if a specific key exists.

# Iterate through keys that match a [pattern], starting from a specific <cursor>,
# and retrieve up to [count] keys per call.
# Example: scan 3000 match time* count 1000
#   Start from cursor 3000, find keys starting with "time", and return up to 1000 keys.
# `scan` returns the next cursor along with the matching keys; pass that cursor back in
# to continue the scan (a cursor of 0 means the iteration is complete).
scan <cursor> [match pattern] [count number]
```
String Commands:
```python
get <key>          # Retrieve the value of a key.
set <key> <value>  # Set a key to a specific value.
incr <key>         # Increment the integer value of a key by 1.
decr <key>         # Decrement the integer value of a key by 1.
```
List Commands:
```python
rpush <key> <value> [<value> ...]  # Add one or more elements to the end of a list.
lpush <key> <value> [<value> ...]  # Add one or more elements to the beginning of a list.
rpop <key>                         # Remove and return the last element of a list.
lpop <key>                         # Remove and return the first element of a list.
lindex <key> <offset>              # Retrieve the element at a specific index in the list.
lrange <key> <start> <end>         # Retrieve a range of elements, including both `start` and `end`.
```
Set Commands:
```python
sadd <key> <value> [<value> ...]  # Add one or more elements to a set.
srem <key> <value> [<value> ...]  # Remove one or more elements from a set.
sismember <key> <value>           # Check if a value is a member of a set.
scard <key>                       # Get the total number of elements in a set.
```
Hash Commands:
```python
hmget <key> <k> [<k> ...]          # Retrieve one or more fields from a hash.
hmset <key> <k> <v> [<k> <v> ...]  # Set one or more field-value pairs in a hash.
hdel <key> <k> [<k> ...]           # Delete one or more fields from a hash.
hlen <key>                         # Get the total number of fields in a hash.
```
ZSet (Sorted Set) Commands:
```python
zadd <key> <score> <value> [<score> <value> ...]  # Add one or more elements with scores to a sorted set.
zrem <key> <value> [<value> ...]                  # Remove one or more elements from a sorted set.
zcard <key>                                       # Get the total number of elements in a sorted set.
```
3. Using Java to Work with Redis
Redis interacts with clients via the TCP protocol. On top of this, it defines a standardized protocol called RESP (REdis Serialization Protocol). Essentially, RESP introduces specific formatting rules to the TCP packets being transmitted. For example, a successful response begins with the character “+”, while an error response starts with “-”.
There are many Java libraries available for interacting with Redis. Below, we’ll explore them one by one.
3.1 Jedis
For Java applications, Jedis is one of the most commonly used clients for interacting with Redis. It supports all the basic Redis commands.
Here’s a simple example of how to use it:
```java
Jedis jedis = new Jedis("127.0.0.1", 6379);
jedis.set("hello", "world");
```
In production environments, it is recommended to use a connection pool to create `Jedis` instances. Here's an example:
```java
// Initialize connection pool configuration
GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
JedisPool jedisPool = new JedisPool(poolConfig, "127.0.0.1", 6379);
Jedis jedis = jedisPool.getResource();
```
Common Connection Pool Configurations:
- maxActive: The maximum number of active connections.
- maxIdle: The maximum number of idle connections.
- minIdle: The minimum number of idle connections.
- maxWaitMillis: The maximum wait time for a caller when the pool is exhausted.
- minEvictableIdleTimeMillis: The minimum idle time for a connection. Connections idle for longer than this will be released.

These settings map onto setters on the pool configuration object, as sketched below.
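A minimal sketch, assuming the commons-pool2-based `GenericObjectPoolConfig` bundled with current Jedis releases (where `maxActive` is exposed as `maxTotal`); the specific values are illustrative:

```java
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class PoolSetup {
    public static void main(String[] args) {
        GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(50);                      // "maxActive": max connections in the pool
        poolConfig.setMaxIdle(20);                       // max idle connections kept in the pool
        poolConfig.setMinIdle(5);                        // min idle connections kept ready
        poolConfig.setMaxWaitMillis(2000);               // max wait when the pool is exhausted
        poolConfig.setMinEvictableIdleTimeMillis(60000); // idle connections older than this are evicted

        JedisPool jedisPool = new JedisPool(poolConfig, "127.0.0.1", 6379);
        try (Jedis jedis = jedisPool.getResource()) {    // close() returns the connection to the pool
            jedis.set("hello", "world");
        }
    }
}
```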
If you are working with a Redis cluster, you should use `JedisCluster` for operations. Here's an example:
```java
Set<HostAndPort> jedisClusterNodes = new HashSet<>();
// Configuring a single node is enough to discover the cluster, but it's recommended to include all nodes.
jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7000));
jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7001));
JedisCluster jedisCluster = new JedisCluster(jedisClusterNodes);
jedisCluster.set("key", "value");
```
3.2 Lettuce
Lettuce provides a comprehensive API for interacting with Redis and supports non-blocking (asynchronous) command execution. Compared to Jedis, Lettuce is more feature-rich and offers better performance.
Here’s an example of using Lettuce:
```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

// Create a Redis client
RedisClient redisClient = RedisClient.create("redis://127.0.0.1:6379");

// Establish a connection
StatefulRedisConnection<String, String> connection = redisClient.connect();

// Access Redis using synchronous commands
RedisCommands<String, String> syncCommands = connection.sync();
syncCommands.set("myKey", "Hello, Lettuce!");
String value = syncCommands.get("myKey");

// Close the connection and shut down the client
connection.close();
redisClient.shutdown();
```
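Lettuce's asynchronous API works the same way but returns futures instead of blocking. A minimal sketch, assuming the same local Redis instance:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class LettuceAsyncDemo {
    public static void main(String[] args) throws Exception {
        RedisClient redisClient = RedisClient.create("redis://127.0.0.1:6379");
        StatefulRedisConnection<String, String> connection = redisClient.connect();

        // Asynchronous commands return a RedisFuture and do not block the calling thread
        RedisAsyncCommands<String, String> asyncCommands = connection.async();
        RedisFuture<String> setFuture = asyncCommands.set("myKey", "Hello, async Lettuce!");
        RedisFuture<String> getFuture = asyncCommands.get("myKey");

        // Blocking here only for the sake of the example; normally you would chain callbacks
        System.out.println(setFuture.get()); // "OK"
        System.out.println(getFuture.get()); // "Hello, async Lettuce!"

        connection.close();
        redisClient.shutdown();
    }
}
```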
3.3 Redisson
Compared to Jedis and Lettuce, Redisson not only provides basic Redis APIs but also includes more advanced features, such as distributed locks.
```java
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

Config config = new Config();
config.useSingleServer().setAddress("redis://127.0.0.1:6379");
RedissonClient redisson = Redisson.create(config);

RLock lock = redisson.getLock("myLock");
try {
    // Acquire the lock
    lock.lock();
    // ... do something
} finally {
    // Release the lock
    lock.unlock();
}
```
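Beyond the blocking `lock()` call, Redisson also offers `tryLock` with a wait time and a lease time, which bounds both how long you wait for the lock and how long you can hold it. A short sketch continuing from the `redisson` client created above:

```java
import java.util.concurrent.TimeUnit;

// Continuing from the `redisson` client created above.
RLock lock = redisson.getLock("myLock");
// Wait up to 5 seconds to acquire the lock, and auto-release it after 30 seconds
// (the lease time prevents a crashed holder from causing a deadlock).
// Note: tryLock(...) throws InterruptedException, so declare or handle it in real code.
boolean acquired = lock.tryLock(5, 30, TimeUnit.SECONDS);
if (acquired) {
    try {
        // ... do something
    } finally {
        lock.unlock();
    }
}
```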
3.4 RedisTemplate in Spring Boot
In a Spring Boot project, you can use `RedisTemplate` to interact with Redis. `RedisTemplate` is an abstraction built on top of libraries like Jedis and Lettuce, giving you the flexibility to choose the underlying Redis client.
Here's an example of using `RedisTemplate`:
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class RedisExampleService {

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    public void doSomething() {
        redisTemplate.opsForValue().set("key1", "value1");
        redisTemplate.opsForValue().get("key1");
        redisTemplate.delete("key1");
    }
}
```
If you need to customize the default connection used by `RedisTemplate`, you can define the following `Bean` configuration:
```java
@Bean
public RedisConnectionFactory redisConnectionFactory() {
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory();
    lettuceConnectionFactory.setHostName("localhost");
    lettuceConnectionFactory.setPort(6379);
    // Additional configuration if needed
    return lettuceConnectionFactory;
}
```
`RedisConnectionFactory` is an interface with three main implementations, backed by Jedis, Lettuce, and Redisson respectively.
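If you also want to control how keys and values are serialized (by default `RedisTemplate` uses JDK serialization), you can declare your own template bean alongside the connection factory. This is a sketch, not a required setup; the choice of String keys and JSON values is an assumption:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisTemplateConfig {

    // A RedisTemplate that stores keys as plain strings and values as JSON.
    // The serializer choice here is an illustration, not a requirement.
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
```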
3.5 Comparison of Jedis, Lettuce, and Redisson
| | Jedis | Lettuce | Redisson |
|---|---|---|---|
| Performance | Medium | High | High |
| Asynchronous Support | × | √ | √ |
| Ease of Use | Easy | Moderate | Moderate |
| Redis API Coverage | Basic API | Basic API | Basic API + advanced features (e.g., distributed locks) |
| Thread Safety | × | √ | √ |
4. Typical Applications of Redis
4.1 Data Caching in Spring Boot
Background: In read-heavy, write-light scenarios, directly querying the database for every request can significantly increase database load and potentially lead to crashes. To mitigate this, storing infrequently updated data in a cache can reduce database pressure.
In a Spring Boot project, the `@Cacheable` annotation makes it straightforward to use caching. For example:
```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    @Cacheable("myCache") // Name of the cache
    public MyObject getData(String key) {
        // do something ...
        return new MyObject();
    }
}
```
When the `@Cacheable` annotation is added to the `getData(...)` method, Spring automatically handles caching. If cached data exists, it is returned directly, bypassing the execution of the method's logic.
Steps to Configure Caching in Spring Boot:
(1) Add Spring Redis Dependency
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```
(2) Add Redis configuration in `application.properties` or `application.yml`, such as:
```
spring.redis.host=127.0.0.1
spring.redis.port=6379
spring.redis.password=yourPassword
```
(3) Annotate the main class with `@EnableCaching` to activate caching functionality:
```java
@SpringBootApplication
@EnableCaching
public class CachingApplication {
    public static void main(String[] args) {
        SpringApplication.run(CachingApplication.class, args);
    }
}
```
(4) Use Cache Annotations in Business Logic. Spring provides the following annotations to handle caching:
- `@CachePut` (for adding or updating cache entries)
- `@Cacheable` (for fetching and caching results)
- `@CacheEvict` (for removing cache entries)
Example:
```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    private Map<String, String> database = new HashMap<>();

    // Adds or updates a cache entry.
    // For example, if `name` is Amy, the cache key is `personCache::Amy`.
    @CachePut("personCache")
    public String addOrUpdatePerson(String name) {
        database.put(name, "hello, " + name);
        return database.get(name);
    }

    // Retrieves data from the cache.
    // If not cached, the method executes and the result is stored in the cache.
    @Cacheable("personCache")
    public String getPerson(String name) throws Exception {
        Thread.sleep(3000);
        return database.get(name);
    }

    // Removes a specific cache entry.
    @CacheEvict("personCache")
    public void deletePerson(String name) {
        database.remove(name);
    }

    // Clears all entries in `personCache`, i.e. every key starting with "personCache::".
    @CacheEvict(value = "personCache", allEntries = true)
    public void deleteAllPerson() {
        database.clear();
    }
}
```
Furthermore, Spring provides the `@Caching` annotation, which allows combining multiple caching annotations. This is particularly useful for scenarios where a single method requires a mix of caching behaviors.
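For example, a hypothetical method that both refreshes one cache entry and evicts another could combine `@CachePut` and `@CacheEvict` under `@Caching`; the method and key expressions below are illustrative only:

```java
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Caching;
import org.springframework.stereotype.Service;

@Service
public class PersonService {

    // Hypothetical example: renaming a person should refresh the entry cached under the
    // new name and evict the entry cached under the old name, in one method call.
    @Caching(
        put = { @CachePut(value = "personCache", key = "#newName") },
        evict = { @CacheEvict(value = "personCache", key = "#oldName") }
    )
    public String renamePerson(String oldName, String newName) {
        // ... update the underlying store here
        return "hello, " + newName;
    }
}
```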
4.2 Distributed Lock
Background: Modern business systems are typically deployed as clusters. If a user performs an action, such as clicking twice consecutively, this can result in two different machines processing the same task simultaneously, leading to data inconsistencies. To handle this, we need distributed locks to control concurrency across machines. Thanks to Redis's single-threaded model and high throughput, it is well-suited for implementing distributed locks.
The core of implementing distributed locks with Redis lies in the `SETNX <key> <value>` command: if the key already exists, the operation fails; if the key does not exist, the operation succeeds. In production, it is not recommended to implement distributed locks yourself. Instead, use Redisson's distributed lock, which provides a more robust and feature-rich solution.
A distributed lock typically involves three key components:
- Lock Construction: This step involves creating the lock object and defining its essential properties, such as `lockKey`, `lockValue`, and `timeout`. Key considerations include:
  - Unique Keys: The `lockKey` must be unique. When using the lock, ensure that the keys are carefully designed to avoid conflicts across different business scenarios. Note that this responsibility lies with the user, not the distributed lock itself.
- Acquiring the Lock: This step attempts to obtain the lock. If the lock cannot be acquired, the system either reports a failure or waits in a spin loop. Key considerations include:
  - Timeout: Locks should have a timeout to prevent deadlocks. For example, if the machine holding the lock crashes, the lock should expire automatically.
  - Atomic Operations: The `SETNX` operation (to create the lock) and the `EXPIRE` operation (to set its expiration time) must be atomic. Otherwise, if the process crashes after `SETNX` but before `EXPIRE`, the lock would never be released. To ensure atomicity, use a single `SET <key> <value> NX EX/PX` command or a Lua script.
  - Failure Handling or Spinning: If acquiring the lock fails, the business logic should decide whether to retry (spin) or return a failure.
- Releasing the Lock: Releasing the lock involves deleting the `lockKey` once the task is completed.
Implementing a Simple Distributed Lock with `RedisTemplate`:
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

import java.time.Duration;

@Component
public class RedisDistributedLockBuilder {

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    // Constructs a lock object with customizable settings
    public Lock build(String lockKey, String lockValue, boolean spin, long timeout, long waitTime) {
        Lock lock = new Lock();
        lock.redisTemplate = redisTemplate;
        lock.lockKey = lockKey;
        lock.lockValue = lockValue;
        lock.spin = spin;
        lock.timeout = timeout;
        lock.waitTime = waitTime;
        return lock;
    }

    // Inner Lock class for handling the locking mechanism
    public static class Lock {
        private RedisTemplate<String, String> redisTemplate;
        private String lockKey;    // The key used for the lock in Redis
        private String lockValue;  // The value for the lock, typically "1"
        private boolean spin;      // Whether to retry acquiring the lock (spinning)
        private long timeout;      // Lock expiration time (in milliseconds)
        private long waitTime;     // Time to wait between retries while spinning (in milliseconds)

        // Method to acquire the lock
        public boolean acquireLock() throws InterruptedException {
            while (true) {
                // Attempt to acquire the lock atomically using setIfAbsent.
                // With a timeout, setIfAbsent issues a single `SET key value NX PX <millis>`
                // command, so creating the key and setting its expiration happen atomically.
                Boolean result = redisTemplate.opsForValue()
                        .setIfAbsent(lockKey, lockValue, Duration.ofMillis(timeout));
                boolean acquired = Boolean.TRUE.equals(result);
                if (!spin) {
                    // If spinning is disabled, return the result of the lock acquisition
                    return acquired;
                }
                if (acquired) {
                    // If the lock is successfully acquired, return true
                    return true;
                }
                // If lock acquisition fails, wait for the specified time before retrying
                Thread.sleep(waitTime);
            }
        }

        // Release the lock
        public void releaseLock() {
            redisTemplate.delete(lockKey);
        }
    }
}
```
Example Usage:
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("test")
public class TestController {

    @Autowired
    private RedisDistributedLockBuilder distributedLockBuilder;

    @RequestMapping("/createOrder")
    public void createOrder(@RequestParam String orderCode) throws Exception {
        RedisDistributedLockBuilder.Lock lock = distributedLockBuilder.build(
                "lock_createOrder_" + orderCode,
                "1",
                false,      // No spinning
                10 * 1000,  // Lock expiration time: 10 seconds
                0);
        boolean result = lock.acquireLock();
        if (!result) {
            throw new Exception("Please do not click repeatedly!");
        }
        try {
            // ... do something
            Thread.sleep(3000);
        } finally {
            lock.releaseLock();
        }
    }

    @RequestMapping("/printDoc") // Printer task
    public void printDoc(@RequestParam String docId, String printerId) throws Exception {
        RedisDistributedLockBuilder.Lock lock = distributedLockBuilder.build(
                "lock_printer_" + printerId,
                "1",
                true,       // Enable spinning to wait for the lock
                60 * 1000,  // Lock expiration time set to 60 seconds
                10);        // During spinning, rest for 10 milliseconds between attempts
        // Acquire the lock; if not available, keep waiting until it can be acquired
        lock.acquireLock();
        try {
            // ... Process the printing task
            Thread.sleep(3000);
        } finally {
            lock.releaseLock();
        }
    }
}
```
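One weakness of the simple `releaseLock()` above is that it deletes the key unconditionally: if the lock expired and another client acquired it in the meantime, you would delete that client's lock. A common refinement (sketched here as an assumption, not part of the original builder) is to store a per-client value such as a UUID instead of `"1"` and release via a Lua script, which Redis executes atomically:

```java
import java.util.Collections;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class SafeLockRelease {

    // Delete the lock key only if it still holds the value we set when acquiring it.
    // The check and the delete run as one step because Redis executes Lua scripts atomically.
    private static final DefaultRedisScript<Long> UNLOCK_SCRIPT = new DefaultRedisScript<>(
            "if redis.call('get', KEYS[1]) == ARGV[1] then " +
            "  return redis.call('del', KEYS[1]) " +
            "else " +
            "  return 0 " +
            "end",
            Long.class);

    public static boolean releaseLock(RedisTemplate<String, String> redisTemplate,
                                      String lockKey, String lockValue) {
        Long deleted = redisTemplate.execute(UNLOCK_SCRIPT,
                Collections.singletonList(lockKey), lockValue);
        return deleted != null && deleted == 1L;
    }
}
```

This kind of ownership check is one of the things Redisson's lock handles for you, which is another reason to prefer it in production.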
5. Common Application Issues with Redis
Redis is often used as a database caching solution, but improper usage can lead to excessive database load or even crashes.
A typical read-write caching process works as follows:
- Check if the requested data exists in the cache. If it does, retrieve it directly from the cache and return the result.
- If the data is not in the cache, retrieve it from the database, write it to the cache, and return the result.
While this design is straightforward, it introduces several potential pitfalls, such as consistency issues and excessive database load under certain conditions.
5.1 Data Consistency
Background: In many business scenarios, cached data may be subject to simultaneous reads and writes. For example, in ticket-booking systems like train ticket sales, data is frequently updated.
Risk: During the brief interval between "writing data to the database" and "updating the cache", requests might fetch outdated data from the cache, leading to inconsistencies between the database and the cache.
Based on the required level of strictness, cache consistency can be categorized as follows:
- Strong Consistency: The cache and database must always remain perfectly synchronized, without any discrepancies. This approach is suited for scenarios with strict consistency requirements and a high read-to-write ratio.
- Weak Consistency: After a data update, the system does not guarantee when the cache will reflect the latest data. However, if the system can ensure that the data will eventually be consistent within a specific timeframe, this is referred to as Eventual Consistency, a special case of weak consistency.
For scenarios without strict consistency requirements, or those with frequent reads and writes, weak consistency is generally sufficient. For example, the 12306 ticketing system (China's official train ticketing system, which handles massive request volumes every day) serves a high volume of both reads and writes. It often shows tickets as available but may fail to complete the booking because the stock has actually sold out. Strong consistency would be impractical in such a case.
The Solution for Strong Consistency: Achieving strong consistency is rare. However, in cases with high read-to-write ratios and strict requirements, it can be implemented using locks. For example, the entire "delete the cache, then update the database" operation is performed under a lock. During this period: ① all queries bypass the cache and directly access the database; ② no data is written to the cache until the update is complete. This ensures that other threads cannot read or write the cache while the data is being updated.
Solutions for Eventual Consistency:
- Delete the cache after updating the database: The next time the data is requested, it will be written back to the cache. This approach is suitable for scenarios with a high read-to-write ratio (see the sketch after this list).
- Rely solely on cache expiration policies: In scenarios with frequent reads and writes, this approach works well. For instance, in ticket-booking systems like 12306, the cache may display outdated ticket availability for some time, even after tickets are sold out.
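A minimal sketch of the first option (update the database, then delete the cache), reusing `RedisTemplate`; the repository interface and key naming are hypothetical, introduced only to make the example self-contained:

```java
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class TicketService {

    private final RedisTemplate<String, String> redisTemplate;
    private final TicketRepository ticketRepository; // hypothetical database access layer

    public TicketService(RedisTemplate<String, String> redisTemplate,
                         TicketRepository ticketRepository) {
        this.redisTemplate = redisTemplate;
        this.ticketRepository = ticketRepository;
    }

    // Read path: try the cache first, fall back to the database and repopulate the cache.
    public String getTicketStock(String trainId) {
        String cacheKey = "ticket:stock:" + trainId;
        String cached = redisTemplate.opsForValue().get(cacheKey);
        if (cached != null) {
            return cached;
        }
        String stock = ticketRepository.findStock(trainId);
        redisTemplate.opsForValue().set(cacheKey, stock);
        return stock;
    }

    // Write path: update the database first, then delete the cache entry.
    // The next read misses the cache and reloads the fresh value from the database.
    public void updateTicketStock(String trainId, String newStock) {
        ticketRepository.updateStock(trainId, newStock);
        redisTemplate.delete("ticket:stock:" + trainId);
    }
}

// Hypothetical repository interface used only to make the sketch self-contained.
interface TicketRepository {
    String findStock(String trainId);
    void updateStock(String trainId, String newStock);
}
```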
Risky Strategies to Avoid:
- Update the cache first, then update the database: If the database update fails, the cache becomes inconsistent.
- Delete the cache first, then update the database: If a read request occurs between these two steps, the cache will return outdated data.
- Update the database first, then update the cache: If the cache update fails, the cache becomes stale and inconsistent with the database.
5.2 Cache Breakdown
Background: In some systems, there may be a hotspot key that experiences extremely high read traffic.
Risk: If this key expires, there will be a short period during which the cache in Redis does not contain this key until it is repopulated. Due to the high read demand for this key, this can result in a sudden surge of requests hitting the database, potentially causing the database to become overwhelmed or fail.
Solutions:
- Disable expiration for hotspot keys: Instead of setting an expiration time, update the cache only when the key is modified. To ensure only one thread updates the cache at a time, you can use a distributed lock (see the sketch after this list).
- Cache preloading: Before a system goes live, preload hotspot keys into the cache to prevent a flood of traffic hitting the database when the system launches.
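A minimal sketch of the first approach, assuming the Redisson client from section 3.3 for the lock; the key names are illustrative:

```java
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class HotKeyRefresher {

    private final RedisTemplate<String, String> redisTemplate;
    private final RedissonClient redissonClient;

    public HotKeyRefresher(RedisTemplate<String, String> redisTemplate,
                           RedissonClient redissonClient) {
        this.redisTemplate = redisTemplate;
        this.redissonClient = redissonClient;
    }

    // Refresh a non-expiring hotspot key; the lock ensures only one instance
    // rebuilds the value at a time, while readers keep serving the old value.
    public void refreshHotKey(String hotKey, String newValue) {
        RLock lock = redissonClient.getLock("lock:refresh:" + hotKey);
        lock.lock();
        try {
            // No expiration is set on the hotspot key itself.
            redisTemplate.opsForValue().set(hotKey, newValue);
        } finally {
            lock.unlock();
        }
    }
}
```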
5.3 Cache Penetration
Background: Some APIs allow data retrieval based on IDs.
Risk: If a large number of requests target nonexistent IDs, the absence of data means these requests will miss the cache and directly hit the database. This can happen in scenarios like: ① Malicious attacks that attempt to iterate over all possible IDs. ② Web crawlers systematically querying large datasets.
Solutions:
- Cache null results: For example, if `ID=1234` has no corresponding data in the database, store a cache entry like `ID_1234: None` to avoid repeated database queries (see the sketch after this list).
- Use a Bloom filter: A Bloom filter can tell you that data might exist or definitely does not exist, effectively preventing most invalid queries from reaching the database.
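A minimal sketch of the null-caching approach with `RedisTemplate`; the repository interface, key prefix, and five-minute placeholder TTL are assumptions for illustration:

```java
import java.time.Duration;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class ProductQueryService {

    private static final String NULL_PLACEHOLDER = "None"; // sentinel for "no such row"

    private final RedisTemplate<String, String> redisTemplate;
    private final ProductRepository productRepository; // hypothetical database access layer

    public ProductQueryService(RedisTemplate<String, String> redisTemplate,
                               ProductRepository productRepository) {
        this.redisTemplate = redisTemplate;
        this.productRepository = productRepository;
    }

    public String getProduct(String id) {
        String cacheKey = "ID_" + id;
        String cached = redisTemplate.opsForValue().get(cacheKey);
        if (cached != null) {
            // A cached placeholder means the database already told us this ID does not exist.
            return NULL_PLACEHOLDER.equals(cached) ? null : cached;
        }
        String fromDb = productRepository.findById(id); // may be null for nonexistent IDs
        if (fromDb == null) {
            // Cache the "miss" with a short TTL so repeated bogus IDs stop hitting the database.
            redisTemplate.opsForValue().set(cacheKey, NULL_PLACEHOLDER, Duration.ofMinutes(5));
            return null;
        }
        redisTemplate.opsForValue().set(cacheKey, fromDb);
        return fromDb;
    }
}

// Hypothetical repository interface used only to make the sketch self-contained.
interface ProductRepository {
    String findById(String id);
}
```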
5.4 Cache Avalanche
Background: In systems where a large amount of data is cached with expiration times, certain issues may arise.
Risk: ① If a significant portion of the cached data has the same expiration time, it can result in a massive number of cache expirations simultaneously, leading to a flood of database requests; ② If Redis goes down, all requests will bypass the cache and directly hit the database.
Solutions:
- Stagger expiration times: For different types of business data, set different expiration times based on specific requirements.
- Add randomization to expiration times: For the same type of data, add a random offset to expiration times to distribute expirations more evenly.
- Implement circuit breakers and fallback mechanisms: Limit database requests to prevent overwhelming the system. For rejected requests, apply fallback strategies such as returning an error message or using a cold cache (see dual-layer cache).
- Dual-layer caching: Use two layers of Redis. The first layer (hot cache) has a short expiration time and stays highly consistent with the database. The second layer (cold cache) has a longer expiration time but less consistency. If the database becomes unavailable, fallback mechanisms can serve data from the cold cache.
- Cache preloading: Preload data into the cache before launching the system to prevent an initial flood of requests from hitting the database.
- High-availability architecture: To address Redis outages, adopt a master-slave cluster setup to maintain availability and avoid single points of failure.
6. Advanced Data Types in Redis
Beyond the five basic data types, Redis supports several advanced data types introduced in later versions.
6.1 Bitmap
Bitmaps: Represent a sequence of bits (0s and 1s) and are particularly efficient for performing bit-level operations. A common use case is implementing Bloom Filters.
Basic Usage of Bitmaps:
```python
# Set the bit at a specific offset for a given key to 0 or 1
setbit <key> <offset> <0|1>

# Get the bit value at a specific offset for a given key (returns 0 or 1).
# If the key doesn't exist or the bit hasn't been explicitly set, it returns 0.
getbit <key> <offset>

# Count the number of bits set to 1 in the value of a key
bitcount <key>

# Perform bitwise operations (AND, OR, XOR, NOT) on one or more keys and store the result in a destination key
bitop <and|or|xor|not> <destkey> <key1> [<key2> ...]
```
Underlying Principles: Bitmaps are essentially stored as strings. Redis internally maps the string into a binary format for processing. You can even mix Bitmap commands with string commands. For example:
```python
> set k1 "ab"
OK
> bitcount k1
(integer) 6
```
Here, the result is 6 because the ASCII values of `a` (97) and `b` (98) are `0110 0001` and `0110 0010` in binary, which contain a total of six 1s.
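As noted above, a common use of Bitmaps is building a Bloom filter. The sketch below is deliberately simplified, assuming Jedis, a fixed bitmap size, and two ad-hoc hash functions; a production filter would derive the size and hash count from the target false-positive rate, or use an existing implementation:

```java
import java.nio.charset.StandardCharsets;

import redis.clients.jedis.Jedis;

// A deliberately tiny Bloom filter on top of a Redis Bitmap (illustrative only).
public class RedisBloomFilterSketch {

    private static final long BITMAP_SIZE = 1_000_000L; // number of bits (assumption)

    private final Jedis jedis = new Jedis("127.0.0.1", 6379);
    private final String key = "bloom:user-ids";

    // Two simple, seeded hash positions per element (assumption for the sketch).
    private long[] positions(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        long h1 = 1125899906842597L;
        long h2 = 31L;
        for (byte b : bytes) {
            h1 = 31 * h1 + b;
            h2 = 131 * h2 + b;
        }
        return new long[] { Math.floorMod(h1, BITMAP_SIZE), Math.floorMod(h2, BITMAP_SIZE) };
    }

    public void add(String value) {
        for (long pos : positions(value)) {
            jedis.setbit(key, pos, true);   // SETBIT <key> <offset> 1
        }
    }

    // false => definitely not present; true => possibly present (false positives allowed)
    public boolean mightContain(String value) {
        for (long pos : positions(value)) {
            if (!jedis.getbit(key, pos)) {  // GETBIT <key> <offset>
                return false;
            }
        }
        return true;
    }
}
```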
6.2 HyperLogLog for Approximate Cardinality
HyperLogLog is designed for estimating the cardinality of large datasets. Cardinality refers to the number of unique elements in a set.
In Redis, HyperLogLog can estimate the cardinality of up to $2^{64}$ unique elements while using only 12 KB of memory.
Basic Usage of HyperLogLog:
```python
# Add one or more elements to a HyperLogLog key
pfadd <key> <element> [<element> ...]

# Estimate the cardinality (number of unique elements) of one or more HyperLogLog keys
pfcount <key> [<key> ...]

# Merge multiple HyperLogLog keys into a new destination key
pfmerge <destkey> <sourcekey> [<sourcekey> ...]
```
Example Application: Monthly Active Users (MAU): For instance, if YouTube wanted to calculate its monthly active users, it could execute `pfadd <key-month> <userid>` for each user interaction. At the end of the month, a simple `pfcount <key-month>` gives the estimated number of unique users. Since HyperLogLog doesn't store actual user IDs, its memory usage remains minimal.
6.3 GEO: Geospatial Information
The GEO feature in Redis is used to store geographical location data and perform operations on it.
Basic Usage of GEO:
```python
# Add geographical locations with longitude, latitude, and a name. Supports batch additions.
geoadd <key> <longitude> <latitude> <member> [<longitude> <latitude> <member> ...]

# Retrieve the geographical coordinates (longitude and latitude) for one or more locations
geopos <key> <member> [<member> ...]

# Calculate the distance between two locations, with an optional unit of measurement
geodist <key> <member1> <member2> [m|km|ft|mi]

# Find locations within a radius of a given longitude and latitude
georadius <key> <longitude> <latitude> <radius> <m|km|ft|mi>

# Find locations within a radius of a given location name
georadiusbymember <key> <member> <radius> <m|km|ft|mi>
```
Example Usage:
```python
# Add some landmarks in Beijing
> geoadd Beijing 116.397469 39.908821 TianAnMen   # Tiananmen Square
(integer) 1

# Add Peking University (PKU) and Tsinghua University (THU)
> geoadd Beijing 116.316833 39.998877 PKU 116.337180 39.971874 THU
(integer) 2

# Retrieve the coordinates of Peking University
> geopos Beijing PKU
1) 1) "116.31683439016342"
   2) "39.998877029375571"

# Calculate the distance between PKU and THU
> geodist Beijing PKU THU km
"3.4680"

# Find locations within a 10 km radius of specific longitude and latitude
> georadius Beijing 116.310547 39.992828 10 km
1) "THU"
2) "PKU"

# Find locations within a 10 km radius of PKU by name
> georadiusbymember Beijing PKU 10 km
1) "THU"
2) "PKU"
```