At Codemancers, we believe every day is an opportunity to grow. This section is where our team shares bite-sized discoveries, technical breakthroughs and fascinating nuggets of wisdom we've stumbled upon in our work.
Published
Author
Nived Hari
System Analyst
You can manually send messages to a Kafka topic using Karafka's producer. This is useful for debugging, testing, or custom event handling. Example:
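A minimal sketch, assuming Karafka is already configured in the app (the topic name and payload fields here are illustrative, not from a real project):

```ruby
require "json"

# The payload should be serialized before producing -- JSON here.
payload = { user_id: 42, event: "signup" }.to_json

# With Karafka configured and a broker reachable, produce_sync blocks
# until the broker acknowledges the message:
#
#   Karafka.producer.produce_sync(topic: "user_events", payload: payload)
```

For fire-and-forget delivery, `produce_async` returns immediately instead of waiting for the broker's acknowledgement.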
Key Points:
• produce_sync ensures the message is sent before proceeding.
• topic specifies the Kafka topic where the message will be published.
• payload should be serialized into JSON or another supported format.
#karafka
Published
Author
Nitturu Baba
System Analyst
Searching in vector databases
1️⃣ Convert Text to Embeddings
• Text is transformed into numerical vectors using AI models such as OpenAI's embedding models, BERT, or Sentence Transformers.
2️⃣ Index & Organise Embeddings
• Instead of scanning all vectors, the database groups similar embeddings into clusters (buckets) to speed up search.
• Common indexing methods:
◦ HNSW (Hierarchical Navigable Small World) – builds a graph in which similar embeddings are connected, reducing search time.
◦ IVFFlat (Inverted File Index) – divides embeddings into clusters (buckets) and searches only the most relevant ones.
3️⃣ Search Using Similarity Metrics
• The query is converted into an embedding and compared to stored vectors using:
◦ Cosine Similarity – measures the angle between vectors while ignoring their magnitude; a higher value means greater similarity (1 = identical, 0 = unrelated, -1 = opposite). Commonly used for text similarity, such as document search.
◦ Euclidean Distance – the straight-line distance between points; a lower value means greater similarity (0 = identical). Well suited to spatial data, such as image or geographical search.
• The database searches only the closest clusters, making lookups faster.
4️⃣ Return the Closest Matches
• The best matches (the top K documents) are ranked and returned based on similarity scores.
📌 In short: convert text → embeddings, group them into clusters, search only the relevant ones, return the top K ranked results.
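The two similarity metrics above can be sketched in plain Ruby; the three-dimensional vectors here stand in for real embeddings, which have hundreds of dimensions:

```ruby
# Toy "embeddings" -- b points in the same direction as a, c does not.
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]
c = [3.0, 1.0, 0.0]

def dot(u, v)
  u.zip(v).sum { |x, y| x * y }
end

# Angle-based: 1 = identical direction, 0 = unrelated, -1 = opposite.
def cosine_similarity(u, v)
  dot(u, v) / (Math.sqrt(dot(u, u)) * Math.sqrt(dot(v, v)))
end

# Straight-line distance: 0 = identical points, larger = less similar.
def euclidean_distance(u, v)
  Math.sqrt(u.zip(v).sum { |x, y| (x - y)**2 })
end

cosine_similarity(a, b)   # => 1.0 (same direction, magnitude ignored)
euclidean_distance(a, b)  # => ~3.74 (magnitudes differ, so distance is non-zero)
```

Note that the two metrics can disagree: a and b are identical under cosine similarity but some distance apart under Euclidean distance, which is why the choice of metric matters.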
#vectordatabase
Published
Author
Nitturu Baba
System Analyst
RAG has three key steps:
1️⃣ Retrieval – Fetch relevant context from a vector database.
2️⃣ Augmentation – Inject the retrieved context into the prompt.
3️⃣ Generation – Use an LLM (GPT, Llama, etc.) to produce a fact-based response.
🔹 Step 1: Retrieval – Finding Relevant Information
Before answering a question, the system searches a vector database for relevant documents.
💬 Example Question: "What is the capital of France?"
🔍 Retrieval Process:
• The system searches the vector database for text relevant to the question.
• It finds a stored Wikipedia snippet: "Paris is the capital of France, known for the Eiffel Tower."
📌Retrieved Context: Paris is the capital of France, known for the Eiffel Tower.
🔹 Step 2: Augmentation – Enriching the Prompt with Context
After retrieving relevant information, the system adds it to the prompt.
📌 Final Augmented Prompt:
User Question: "What is the capital of France?"
Retrieved Context: "Paris is the capital of France, known for the Eiffel Tower."
Final Prompt: "Using the provided context, answer: What is the capital of France?"
👉 Why is this useful?
✅ Retrieval gives the AI up-to-date context instead of relying only on pre-trained data.
✅ Augmentation refines the LLM's input, making answers more precise.
✅ Reduces hallucinations, so the AI is less likely to generate incorrect facts.
🔹 Step 3: Generation – Producing the Final Answer
Once the context has been retrieved and the prompt augmented, the LLM generates the final response.
💡 Example Output: "The capital of France is Paris, known for the Eiffel Tower and rich history."
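The three steps can be sketched end-to-end in a few lines of Ruby; retrieval and the LLM call are stubbed out, and the augmentation is plain string templating (the snippet is the one from the example above):

```ruby
question  = "What is the capital of France?"

# Step 1 (retrieval) would query a vector database; here the result is hard-coded.
retrieved = "Paris is the capital of France, known for the Eiffel Tower."

# Step 2 (augmentation): inject the retrieved context into the prompt.
prompt = <<~PROMPT
  Retrieved Context: "#{retrieved}"
  Using the provided context, answer: #{question}
PROMPT

# Step 3 (generation) would send `prompt` to an LLM via your client of choice.
```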
#AI #RAG
Published
Author
Adithya Hebbar
System Analyst
Updating Session in NextAuth
In NextAuth, you can update the session data using the update function from useSession(). Here's how you can modify user details dynamically:
Assuming the strategy: "jwt" session strategy is used, calling update() triggers the jwt callback with trigger: "update". You can use this to update the session object on the server.
TypeScript
export default NextAuth({
  callbacks: {
    // Using the `...rest` parameter to be able to narrow down the type based on `trigger`
    jwt({ token, trigger, session }) {
      if (trigger === "update" && session?.name) {
        // Note that `session` can be any arbitrary object, remember to validate it!
        token.name = session.name
        token.role = session.role
      }
      return token
    }
  }
})
This updates the session without requiring a full reload, ensuring the UI reflects the changes immediately. 🚀
#next-auth #nextjs
Published
Author
Puneeth kumar
System Analyst
Traits in FactoryBot help define reusable variations of a factory without creating multiple factories. They are useful when we need optional attributes or specific states in test data. Say we have a User model with different roles (admin, regular, guest). Instead of writing separate factories, we can use traits like below:
Ruby
# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    first_name { Faker::Name.first_name }
    email { Faker::Internet.unique.email }
    password { "password123" }

    trait :admin do
      role { "admin" }
    end

    trait :guest do
      role { "guest" }
    end

    trait :confirmed do
      confirmed_at { Time.current }
    end
  end
end
• create(:user) builds a user with the default attributes; create(:user, :admin) applies the admin trait.
• Traits compose, so create(:user, :admin, :confirmed) builds a confirmed admin.
• This keeps test data flexible without duplicating near-identical factories. #ruby
Published
Author
Nived Hari
System Analyst
In Ruby, there are two ways to define hash keys:
1. Using the Colon Syntax (:) – Creates a Literal Symbol Key
Ruby
item.update!(ordered_quantity: new_quantity)
Key Behavior: The key is treated as a fixed symbol (e.g., :ordered_quantity).
2. Using the Hash Rocket (=>) – Evaluates the Left-Hand Side as a Key
Ruby
item.update!(quantity_field => new_quantity)
Key Behavior: The left-hand side is evaluated dynamically, making it useful for variable-based keys.
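With plain hashes the difference is easy to see (`field` is a hypothetical variable):

```ruby
field = :quantity

h1 = { quantity: 1 }  # colon syntax: the key is always the literal symbol :quantity
h2 = { field => 2 }   # hash rocket: `field` is evaluated, so the key is :quantity here

h1.keys  # => [:quantity]
h2.keys  # => [:quantity]
```

Writing `{ field: 2 }` instead would create the key `:field`, not `:quantity`, which is why the hash rocket is required for dynamic keys.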
Example Use Case: Dynamic Keys
Ruby
quantity_field = item.respond_to?(:ordered_quantity) ? :ordered_quantity : :quantity
new_quantity = item.public_send(quantity_field).to_i + item_case[:quantity].to_i
item.update!(
  quantity_field => new_quantity # evaluates to :ordered_quantity or :quantity
)
Here, quantity_field is determined dynamically based on the model, so => must be used instead of :.
When to Use =>?
• When working with multiple models that have different column names
• When dynamically generating hash keys at runtime
• When building flexible APIs that handle varying attribute names
Takeaway:
• Use : when the key is static and always the same.
• Use => when the key is stored in a variable or needs to be evaluated dynamically.
#ruby
Published
Author
Nived Hari
System Analyst
In Ruby, attr_reader automatically creates a getter method for instance variables, making code cleaner and more concise. Instead of writing:
Ruby
def some_number
  @some_number
end
You can simply use:
Ruby
attr_reader :some_number
This makes attributes read-only while keeping the class lightweight.
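For example, with a hypothetical Temperature class:

```ruby
class Temperature
  attr_reader :celsius  # generates a `celsius` getter, but no `celsius=` setter

  def initialize(celsius)
    @celsius = celsius
  end
end

t = Temperature.new(21)
t.celsius                 # => 21
t.respond_to?(:celsius=)  # => false (read-only)
```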
#ruby
Published
Author
Nitturu Baba
System Analyst
Real-time AI response streaming improves user experience by reducing wait times and making interactions feel more dynamic. Instead of waiting for the entire response to be generated before displaying it, streaming allows data to be processed and presented incrementally.
An example of AI response streaming using a NestJS backend and a Next.js frontend.
Setting Up the NestJS Backend for Streaming AI Responses
Controller
TypeScript
import { Controller, Post, Body, Res } from '@nestjs/common';
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { Response } from 'express';

@Controller('orchestrator')
export class OrchestratorController {
  @Post('chat')
  async chat(@Body() payload: any, @Res() res: Response) {
    const { messages } = payload;
    const result = streamText({
      model: openai('gpt-4o'),
      messages,
    });
    result.pipeDataStreamToResponse(res); // Streams the AI response directly to the client
  }
}
How It Works
• The @Post('chat') endpoint listens for chat requests.
• The streamText function sends user messages to OpenAI and receives a streamed response.
• pipeDataStreamToResponse(res) streams the AI-generated content directly to the client as it arrives.
Building the Next.js Frontend for AI Response Streaming
chat/page.tsx
TypeScript
'use client';
import { useChat } from '@ai-sdk/react';

export default function Home() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: 'https://localhost:3000/api/orchestrator/chat', // makes the POST request to the NestJS backend
  });
  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          name="prompt"
          value={input}
          onChange={handleInputChange}
          className="text-black"
        />
        <button type="submit">Submit</button>
      </form>
    </div>
  );
}
How It Works
• The useChat hook from the AI SDK manages state and streaming logic automatically.
• It sends user messages to the backend and updates the UI in real time as responses arrive.
• The messages array updates dynamically, displaying each chunk of AI-generated text as it is received.
#streaming #nextjs #nestjs
Published
Author
Puneeth kumar
System Analyst
Collection caching is a way to speed up rendering multiple items on a page by storing their HTML in the cache. How it works:
1. When we use <%= render partial: 'products/product', collection: @products, cached: true %>, Rails checks whether each product's HTML is already stored in the cache.
2. If a product's HTML is found in the cache, Rails loads it quickly instead of rendering it again.
3. If a product's HTML is not in the cache, Rails renders it, stores it in the cache, and reuses it next time.
4. The big advantage: Rails fetches all cached products in one read (instead of one by one), making it much faster.
#caching #collection_caching
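Conceptually, the steps above behave like a per-item memo cache keyed on each record's identity and freshness. A plain-Ruby sketch of the idea (no Rails required; names are illustrative):

```ruby
# Simulated fragment cache: each product's HTML is rendered at most once
# per cache key, then reused on subsequent renders.
cache = {}
renders = 0

render_product = lambda do |product|
  renders += 1 # counts actual template renders
  "<li>#{product[:name]}</li>"
end

cached_render = lambda do |product|
  key = [product[:id], product[:updated_at]] # akin to Rails' cache_key_with_version
  cache[key] ||= render_product.call(product)
end

products = [
  { id: 1, name: "Keyboard", updated_at: 1 },
  { id: 2, name: "Mouse", updated_at: 1 }
]

first_pass  = products.map { |p| cached_render.call(p) } # two real renders
second_pass = products.map { |p| cached_render.call(p) } # all cache hits
```

Because the key includes updated_at, editing a product naturally invalidates its cached HTML on the next render.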