Spring AI with DeepSeek R1
Developing a web application that integrates DeepSeek R1
Spring AI makes it straightforward to integrate AI into Spring applications. It enables Java and Spring developers to use generative AI in their projects, and it was inspired by LangChain and LlamaIndex.
Spring AI Documentation for DeepSeek: https://docs.spring.io/spring-ai/reference/api/chat/deepseek-chat.html
Spring AI provides the following features:
- A portable API across AI providers for chat, text-to-image, and embedding models, with both synchronous and streaming options.
- Support for leading model providers such as Anthropic (Claude), OpenAI (GPT models), Microsoft (Azure OpenAI), and Ollama for running local models such as Meta's Llama.
- Mapping AI model output to POJOs.
- Support for major vector database providers such as Apache Cassandra, Azure Cosmos DB, and more.
- A portable API across vector store providers, including a SQL-like metadata filter API.
- ETL pipeline support for data engineering.
- Tools to evaluate generated content and guard against hallucinated responses.
- Spring Boot auto-configuration for AI models and vector databases.
- Support for chat conversation memory and RAG (retrieval-augmented generation).
- A fluent API for communicating with LLMs.
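Two of the features above, the fluent API and POJO mapping, look roughly like this in practice. This is a sketch based on Spring AI's ChatClient API; it assumes a configured ChatClient bean, and the MovieRecommendation record is an illustrative type of mine, not part of the library:

```java
// Illustrative record, not part of Spring AI
record MovieRecommendation(String title, String reason) {}

// Plain text response via the fluent API
String answer = chatClient.prompt()
        .user("Recommend a sci-fi movie")
        .call()
        .content();

// The same call, but with the output mapped to a POJO
MovieRecommendation rec = chatClient.prompt()
        .user("Recommend a sci-fi movie")
        .call()
        .entity(MovieRecommendation.class);
```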
In this article, I'm going to integrate DeepSeek R1 with Spring AI, running the model on my local machine using Ollama.
Ollama Set up
You can download Ollama (the macOS build, in my case) from this URL: https://ollama.com/download/mac
After installing Ollama, pull the DeepSeek R1 model to your local machine by running these commands:
ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b
I chose the 1.5B-parameter DeepSeek R1 model to save disk space and resources, but if you have a high-end machine you can go for a variant with more parameters.
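For reference, you can check what is already installed and pull a larger variant like this (model tags come from the Ollama model library and may change over time, so verify them there first):

```shell
ollama list                   # show locally installed models and their sizes
ollama pull deepseek-r1:7b    # pull a larger variant if you have the resources
ollama run deepseek-r1:7b
```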
Spring Project Creation
Now create a Spring project and add the following dependencies to your POM file.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-api -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.16</version>
</dependency>
We added the Thymeleaf, Spring Web, Spring AI Ollama, and SLF4J dependencies to our POM file.
We are going to give users a webpage where they can chat with the DeepSeek LLM. For that, we need an API that communicates with DeepSeek.
You can build this API in two ways: synchronously, or asynchronously with streaming.
Properties File:
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=deepseek-r1:1.5b
Add these two properties to your properties file. The first points Spring AI at the local Ollama server (11434 is Ollama's default port), and the second selects the model to use.
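Optionally, you can tune generation behavior from the same file. For example (property names follow Spring AI's Ollama chat options; the values here are just a starting point):

```properties
# lower temperature = more deterministic answers
spring.ai.ollama.chat.options.temperature=0.7
# cap the number of tokens the model may generate per reply
spring.ai.ollama.chat.options.num-predict=512
```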
1. Synchronous API:
Controller class:
@GetMapping("/prompt")
public String askFromAI(@RequestParam(value = "question") String question) {
    try {
        return deepSeekAIModel.askToDeepSeekAI(question);
    } catch (Exception e) {
        LOG.error("There was an error calling DeepSeek AI. Please try again later.", e);
    }
    return "Error occurred while processing the request";
}
Service class:
private final ChatClient chatClient;

public String askToDeepSeekAI(String request) {
    return chatClient.prompt(request).call().content();
}
With just these two simple classes, you can create an API that communicates with the model through Spring AI. That is the beauty of Spring AI.
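One practical note: DeepSeek R1 emits its chain-of-thought inside `<think>...</think>` tags before the final answer. If you only want to show the final answer to the user, you can strip that block in the service layer. This is a plain-Java helper I added for illustration; the class and method names are mine, not part of Spring AI:

```java
public class ThinkTagStripper {

    // (?s) makes '.' match newlines, since the reasoning block usually spans several lines
    private static final String THINK_BLOCK = "(?s)<think>.*?</think>";

    public static String stripThink(String response) {
        if (response == null) {
            return "";
        }
        return response.replaceAll(THINK_BLOCK, "").trim();
    }
}
```

You would call `ThinkTagStripper.stripThink(...)` on the value returned by `chatClient.prompt(request).call().content()` before sending it to the browser.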
2. Asynchronous API:
Controller class:
@GetMapping("/promptAsync")
public Flux<String> askFromAIAsync(@RequestParam(value = "question") String question) {
    return deepSeekAIModel.askToDeepSeekAIWithStream(question);
}
Service class:
private final ChatClient chatClient;

public Flux<String> askToDeepSeekAIWithStream(String question) {
    return chatClient.prompt(question).stream().content();
}
The ChatClient offers a fluent API for communicating with an AI model. It supports both a synchronous and streaming programming model. The fluent API has methods for building up the constituent parts of a Prompt that is passed to the AI model as input.
We can create a ChatClient using a ChatClient.Builder object. You can obtain an autoconfigured ChatClient.Builder instance from Spring Boot's autoconfiguration for any ChatModel, or create one programmatically.
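The fluent API also lets you attach a system prompt alongside the user's question. A sketch, assuming the autoconfigured ChatClient bean and an illustrative instruction of my own:

```java
String content = chatClient.prompt()
        .system("You are a concise assistant. Answer in at most three sentences.")
        .user(question)
        .call()
        .content();
```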
If the ChatClient bean doesn't autowire correctly, use this configuration class:
@Configuration
public class AIConfig {

    @Bean
    public ChatClient chatClient(ChatModel chatModel) {
        return ChatClient.builder(chatModel).build();
    }
}
3. Thymeleaf Template:
I created a Thymeleaf page that lets the user ask questions and displays the model's responses.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>AI Chat - DeepSeek R1</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css">
    <style>
        body {
            background-color: #f8f9fa;
            font-family: Arial, sans-serif;
        }
        .container {
            max-width: 600px;
            margin-top: 50px;
            padding: 20px;
            background: white;
            border-radius: 8px;
            box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.1);
        }
        #response-container {
            margin-top: 20px;
            white-space: pre-wrap;
            background: #e9ecef;
            padding: 15px;
            border-radius: 5px;
            min-height: 50px;
        }
        #loading {
            display: none;
            margin-top: 10px;
            text-align: center;
            font-weight: bold;
            color: #007bff;
        }
    </style>
</head>
<body>
<div class="container">
    <h3 class="text-center">AI Chat - DeepSeek R1</h3>
    <div class="mb-3">
        <label for="question" class="form-label">Enter your prompt:</label>
        <input type="text" id="question" class="form-control" placeholder="Ask something..." required>
    </div>
    <button onclick="sendPrompt()" class="btn btn-primary w-100">Ask AI</button>
    <div id="loading">⏳ Thinking...</div>
    <div id="response-container"></div>
</div>
<script>
    async function sendPrompt() {
        const question = document.getElementById("question").value.trim();
        const responseContainer = document.getElementById("response-container");
        const loading = document.getElementById("loading");
        if (!question) {
            alert("Please enter a question.");
            return;
        }
        // Show loading state
        responseContainer.innerHTML = "";
        loading.style.display = "block";
        try {
            const response = await fetch(`/api/ai/prompt?question=${encodeURIComponent(question)}`);
            const result = await response.text();
            // Hide loading and display response
            loading.style.display = "none";
            responseContainer.innerText = result;
        } catch (error) {
            loading.style.display = "none";
            responseContainer.innerHTML = `<span style="color:red;">Error: Unable to fetch response.</span>`;
        }
    }
</script>
</body>
</html>
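The page above calls the synchronous endpoint. If you want to render tokens as they arrive from the streaming /promptAsync endpoint instead, you can read the fetch response body incrementally. This is a sketch of the reading helper only; wiring it into the page and the exact endpoint path are left to you:

```javascript
// Reads a streaming fetch() response body chunk by chunk.
// onChunk is called with each decoded text fragment; the full text is returned.
async function readStream(stream, onChunk) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}

// Example wiring (endpoint path assumed from the controller above):
// const response = await fetch(`/api/ai/promptAsync?question=${encodeURIComponent(q)}`);
// await readStream(response.body, t => { responseContainer.innerText += t; });
```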
This is where I serve the webpage:
@Controller
public class WebController {

    @GetMapping("/")
    public String home() {
        return "index";
    }
}
Now you can see an interface like this, where you can ask questions and get answers.

Conclusion
I've only scratched the surface of Spring AI, but it seems very solid. As Java and Spring developers, we can now work with LLMs easily thanks to Spring AI.
You can find the GitHub repo here for this article: https://github.com/CDinuwan/spring-ai-with-deepseek.git
Happy Coding 🚀!