AI Talks

Tech Hub Conference Room, San Francisco

Weekly discussions on the latest advancements in artificial intelligence and machine learning. Join us every Saturday at 6:00 PM for presentations, discussions, and networking with fellow AI enthusiasts. Each session focuses on a specific topic in AI, from foundational concepts to cutting-edge research. Sessions are led by experts in the field and include time for Q&A and open discussion.

Related Content

Reinforcement Learning (RL) · Open-source AI · Cost-effective LLM

Session 13: DeepSeek

DeepSeek-R1, an open-source large language model developed by DeepSeek AI Lab, has quickly gained traction due to its innovative use of pure Reinforcement Learning (RL) instead of Supervised Fine-Tuning (SFT), enabling autonomous reasoning improvements. With strong performance benchmarks, cost-effective training, and local deployment capabilities, DeepSeek-R1 presents a competitive alternative to larger commercial models while also facing challenges like jailbreaking vulnerabilities.
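To make the pure-RL idea concrete, here is a minimal sketch of rule-based reward functions of the kind used to train reasoning models without SFT: one reward for producing a well-formed reasoning/answer structure, one for answer correctness. This is an illustrative toy, not DeepSeek's actual training code; the tag names and scoring are assumptions.

```python
import re

def format_reward(completion: str) -> float:
    """Reward 1.0 if the completion wraps its reasoning in <think> tags
    and its final answer in <answer> tags, else 0.0."""
    pattern = r"<think>.+?</think>\s*<answer>.+?</answer>"
    return 1.0 if re.search(pattern, completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, gold: str) -> float:
    """Reward 1.0 if the text inside <answer> matches the reference answer."""
    m = re.search(r"<answer>(.+?)</answer>", completion, re.DOTALL)
    return 1.0 if m and m.group(1).strip() == gold.strip() else 0.0

def total_reward(completion: str, gold: str) -> float:
    # An RL optimizer would maximize this signal over sampled completions.
    return format_reward(completion) + accuracy_reward(completion, gold)

good = "<think>2 + 2 is 4</think><answer>4</answer>"
bad = "The answer is 4."
```

Because the rewards are computed by rules rather than a learned judge, the model can be trained at scale without labeled reasoning traces, which is part of what makes this approach cost-effective.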

Multi-Agent Systems · AI Collaboration · Automation

Session 10: Multi-Agent Frameworks, Let Agents Talk

In our 10th session of "AI Talks," we explored multi-agent frameworks, their structures, and their role in enabling agents to collaborate efficiently for automation and decision-making. We reviewed key frameworks like LangGraph, LlamaIndex Workflow, Eliza, and OpenAI Swarm, along with a supervisor-driven flow-based architecture, highlighting how these systems enhance AI capabilities by optimizing task distribution and coordination.
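The supervisor-driven pattern can be sketched in a few lines: a supervisor receives a task, decides which worker agent should handle it, and returns the result. This is a hypothetical minimal illustration, not code from LangGraph, LlamaIndex Workflow, Eliza, or OpenAI Swarm; in real frameworks the routing decision is typically made by an LLM rather than a keyword rule.

```python
from typing import Callable, Dict

def research_agent(task: str) -> str:
    # Stand-in for an agent that searches and summarizes sources.
    return f"[research] notes on: {task}"

def writer_agent(task: str) -> str:
    # Stand-in for an agent that drafts text from gathered material.
    return f"[writer] draft for: {task}"

class Supervisor:
    """Routes each incoming task to the most suitable worker agent."""

    def __init__(self, agents: Dict[str, Callable[[str], str]]):
        self.agents = agents

    def route(self, task: str) -> str:
        # Naive keyword routing; real supervisors use an LLM for this decision.
        name = "research" if "find" in task.lower() else "writer"
        return self.agents[name](task)

supervisor = Supervisor({"research": research_agent, "writer": writer_agent})
result = supervisor.route("Find papers on multi-agent systems")
```

The value of the pattern is the separation of concerns: the supervisor owns task distribution and coordination, while each agent stays small and specialized.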

RAG evaluation · Automated metrics · Ethical considerations

Session 9: Evaluating the Generation Part of RAG Pipelines

The ninth session of *AI Talks* focused on evaluating the generation component of RAG pipelines, highlighting key challenges like hallucinations, coherence, and response latency while stressing the need to assess both the retrieval and generation stages. Various evaluation methods, including automated metrics (BLEU, ROUGE, GPT-4-based scoring), human evaluation, A/B testing, and automation tools like OpenAI Evals, were explored alongside ethical considerations for fairness and bias mitigation.
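As a concrete example of the automated-metric family, here is a self-contained ROUGE-1 F1 computation: unigram overlap between a generated answer and a reference. This is a simplified sketch of the metric's definition, not a replacement for a full ROUGE implementation (which also handles stemming and longer n-grams).

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat is on the mat")
```

Metrics like this are cheap to run over large test sets, which is why they are typically paired with (rather than replaced by) human evaluation and LLM-based scoring for qualities such as coherence.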

Retrieval Quality · Evaluation Metrics · Relevancy Analysis

Session 8: Evaluating RAG pipelines

RAG pipeline evaluation involves assessing retrieval and generation phases using traditional IR metrics (accuracy, precision, recall) and NLP evaluation methods (BLEU, ROUGE, METEOR). Evaluation is conducted through offline (static dataset) and online (live feedback) methods with similarity and relevancy score analysis used to ensure retrieval quality, leveraging statistical measures and LLM-based assessments for reliability and alignment.
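The IR metrics mentioned above are straightforward to compute once you have a ranked retrieval list and a set of known-relevant documents. Below is a small sketch of precision@k and recall@k; the document IDs are hypothetical.

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for doc in top_k if doc in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents that appear in the top-k."""
    top_k = retrieved[:k]
    hits = sum(1 for doc in top_k if doc in relevant)
    return hits / len(relevant)

retrieved = ["d3", "d1", "d7", "d2"]   # ranked output of the retriever
relevant = {"d1", "d2", "d5"}          # ground-truth relevant documents
p = precision_at_k(retrieved, relevant, 3)  # only d1 is a hit in the top 3
r = recall_at_k(retrieved, relevant, 3)
```

In offline evaluation these are averaged over a static query set; online evaluation replaces the ground-truth set with live user feedback signals.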

Multimodal Retrieval · ColPali · Vision-Language Models

Session 6: Retrieval with Vision-Language Models

The discussion covered Chain of Thought (CoT) reasoning, which enhances LLM problem-solving by breaking down complex queries into logical steps, and function calling, which enables LLMs to interact with external systems like APIs and databases. It concluded with an exploration of four function calling types, addressing challenges like latency, error handling, and security, reinforced through hands-on coding demonstrations.
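The function-calling mechanism described above can be sketched as a tool registry plus a dispatcher: the model emits a structured call (here, JSON), and the host application parses it, executes the matching function, and handles malformed calls. The tool name, schema, and weather stub are all hypothetical; real LLM APIs wrap this loop with their own request formats.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real external API request.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call and execute it, with basic error handling."""
    try:
        call = json.loads(model_output)
        fn = TOOLS[call["name"]]
        return fn(**call["arguments"])
    except (json.JSONDecodeError, KeyError, TypeError) as exc:
        # Malformed JSON, unknown tool, or bad arguments: one of the main
        # error-handling challenges raised in the session.
        return f"tool error: {exc}"

reply = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
```

Keeping dispatch in application code (rather than letting the model execute anything directly) is also the main security boundary: only registered tools can ever run.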

Prompting Techniques · AI Model Parameters · Chain-of-Thought (CoT)

Session 5: Prompt Engineering

The promptingguide.ai documentation, as discussed in our AI Talks meeting, covers key parameters for controlling AI outputs, such as Temperature and Top P for randomness, frequency and presence penalties for repetition, and basic constraints like max length and stop sequences. It also emphasizes clear and precise prompt design, using separators and direct instructions, while highlighting few-shot prompting limitations and introducing chain-of-thought (CoT) prompting for complex reasoning tasks in large models.
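Top P is easiest to understand by implementing it: keep the smallest set of highest-probability tokens whose cumulative probability reaches p, then renormalize. This is a minimal sketch of nucleus sampling's filtering step over a toy distribution, not any provider's actual implementation.

```python
def top_p_filter(probs, p=0.9):
    """Keep the smallest top-probability set reaching cumulative mass p,
    then renormalize so the kept probabilities sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {t: pr / total for t, pr in kept.items()}

# With p=0.8, the low-probability tail token is cut from the candidate pool.
filtered = top_p_filter({"cat": 0.5, "dog": 0.3, "axolotl": 0.2}, p=0.8)
```

Lower p makes outputs more deterministic by shrinking the candidate pool, much as lower Temperature sharpens the distribution; the two knobs are usually tuned one at a time.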

AI · LLM · Information Retrieval

Session 3: RAG, Retrieval & Generation

The meeting covered Retrieval Augmented Generation (RAG) as a method to enhance Large Language Models (LLMs) with up-to-date knowledge by integrating retrieval and generation processes. Key aspects included data ingestion with quality control, retrieval using dense, sparse, and hybrid methods, and response generation optimized through prompt engineering and evaluation metrics for accuracy and contextual relevance.
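The dense/sparse/hybrid distinction can be sketched with toy scoring functions: a keyword-overlap score standing in for sparse (BM25-style) retrieval, cosine similarity over embedding vectors for dense retrieval, and a weighted blend for hybrid. All three functions here are simplified illustrations under that assumption.

```python
def sparse_score(query: str, doc: str) -> float:
    """Keyword overlap: a toy stand-in for BM25-style sparse retrieval."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def dense_score(q_vec, d_vec) -> float:
    """Cosine similarity between embedding vectors (dense retrieval)."""
    dot = sum(a * b for a, b in zip(q_vec, d_vec))
    norm = (sum(a * a for a in q_vec) ** 0.5) * (sum(b * b for b in d_vec) ** 0.5)
    return dot / norm

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5) -> float:
    """Weighted blend: alpha controls the sparse/dense trade-off."""
    return alpha * sparse_score(query, doc) + (1 - alpha) * dense_score(q_vec, d_vec)

s = hybrid_score("rag pipelines", "evaluating rag pipelines", [1.0, 0.0], [1.0, 0.0])
```

Hybrid retrieval earns its keep when sparse matching catches exact terms (names, identifiers) that dense embeddings blur, while dense matching catches paraphrases that share no keywords.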

AI · RAG

Session 1: Retrieval Augmented Generation (RAG)

The meeting covered the necessity of RAG in addressing generative model limitations like outdated knowledge, hallucinations, and private data leaks, explaining how a retriever-generator framework enhances response accuracy. It also explored technical implementation, applications across different modalities, and challenges such as memory constraints and data ingestion, highlighting key tools like FAISS, Qdrant, LlamaIndex, and LangChain for effective RAG deployment.
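The retriever-generator framework reduces to two steps: retrieve the most relevant documents for a query, then assemble them into a grounded prompt for the generator. Here is a hypothetical end-to-end sketch with a toy keyword retriever and a stubbed-out LLM call; real deployments would substitute a vector index such as FAISS or Qdrant for the ranking step.

```python
DOCS = [
    "Qdrant is a vector database used for similarity search.",
    "FAISS is a library for efficient similarity search over dense vectors.",
    "LangChain helps compose LLM applications from reusable components.",
]

def retrieve(query: str, docs, k: int = 2):
    """Rank documents by keyword overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context) -> str:
    """Pack retrieved context into a grounded prompt for the generator LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

context = retrieve("what is FAISS similarity search", DOCS)
prompt = build_prompt("what is FAISS similarity search", context)
# The prompt would now be sent to the generator; the LLM call is omitted here.
```

Grounding the prompt in retrieved text is precisely how this framework mitigates the outdated-knowledge and hallucination problems described above: the model answers from supplied evidence rather than from its parameters alone.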
