
What we’re about
Silicon Valley Generative AI is a dynamic community of professionals, researchers, startup founders, and enthusiasts who share a passion for generative AI technology. As part of the wider GenAI Collective network, the group provides a fertile ground for the exploration of cutting-edge research, applications, and discussions on all things related to generative AI.
Our community thrives on two main types of engagement. First, in partnership with Boulder Data Science, we host bi-weekly "Paper Reading" sessions. These meetings are designed for deep dives into the latest machine learning papers, fostering a culture of continuous learning and collaborative research. It's an excellent opportunity for anyone looking to understand the scientific advances propelling the field forward.
Second, we organize monthly "Talks" that offer a broader range of insights into the world of generative AI. These sessions feature presentations by an eclectic mix of speakers, from industry pioneers and esteemed researchers to emerging startup founders and subject matter experts. Unlike the paper reading sessions, which are more academically inclined, the talks are tailored to a more general audience. Topics span the gamut from the technical intricacies of the latest generative models to their real-world applications, startup pitches, and discussions of the legal and ethical implications of AI.
Whether you're a seasoned professional or merely curious about generative AI, Silicon Valley Generative AI provides a comprehensive platform to learn, discuss, and network.
We strive to be an inclusive community that fosters innovation, knowledge-sharing, and a collective drive to shape the future of AI responsibly. Join us to stay at the forefront of generative AI research, news, and applications.
For those eager to dive deeper into the technical aspects, you can join us on the GenAI Collective Slack, specifically the #discuss-technical channel, to keep the conversations flowing between meetups.
We are also looking for the following:
• Readers: people who are willing to read papers and speak about them.
• Speakers and presenters: people who will put together educational materials, present to the group, and answer questions.
• Industry events: if you have a generative AI event, such as a hackathon, lunch-and-learn, or an information session on your product, we would be happy to include it in the calendar.
Please contact Matt White at contact@matt-white.com
Upcoming events (4+)
Generative AI Paper Reading: "Deconstructing Long Chain-of-Thought"
Details:
"Deconstructing Long Chain-of-Thought: A Structured Reasoning Optimization Framework for Long CoT Distillation" (Yijia Luo et al., 2025)
arXiv Paper
Discussion Topics:
Long Chain-of-Thought (CoT) Distillation
- Challenges in transferring reasoning capabilities across heterogeneous models
- Analysis of reasoning structures: linear, tree, and network-based patterns
- Overthinking phenomenon and its impact on smaller student models
DLCoT Framework
- Intelligent segmentation of reasoning chains into modular components
- Redundancy elimination to improve token efficiency and accuracy
- Error correction strategies for optimizing intermediate reasoning steps
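The segmentation and redundancy-elimination stages above can be sketched roughly as follows. This is an illustrative toy, not the DLCoT implementation: the blank-line segmentation and the Jaccard-similarity deduplication rule are assumptions made for the sketch.

```python
# Toy sketch of DLCoT-style segmentation + redundancy elimination:
# split a long reasoning trace into steps, then drop steps that are
# near-duplicates of steps already kept.

def segment_cot(cot: str) -> list[str]:
    """Split a reasoning trace into steps on blank lines."""
    return [s.strip() for s in cot.split("\n\n") if s.strip()]

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two steps."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def deduplicate(steps: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a step only if it is not too similar to an earlier kept step."""
    kept: list[str] = []
    for step in steps:
        if not any(jaccard(step, k) >= threshold for k in kept):
            kept.append(step)
    return kept

trace = "Let x = 2 + 2.\n\nSo x = 4.\n\nSo x = 4.\n\nAnswer: 4."
steps = deduplicate(segment_cot(trace))  # the repeated "So x = 4." is dropped
```

The real framework also classifies reasoning structures (linear, tree, network) and applies error correction to intermediate steps, which a similarity filter alone does not capture.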
Performance Benchmarks
| Metric | DLCoT Models | Baseline Models | Improvement |
| ------ | ------------ | --------------- | ----------- |
| Token Efficiency | +28% | Standard CoT | Significant |
| Accuracy (GSM8K) | 96.8% | 93.5% | +3.3% |
Implementation Challenges
- Balancing structural richness with token efficiency
- Addressing logical coherence during redundancy reduction
- Ensuring compatibility across diverse model architectures
Key Technical Features
- Focus on reasoning trunk diversity rather than exhaustive exploration paths
- Experimental validation across Qwen2.5 and Llama3.1 series models
- Practical a
---
Silicon Valley Generative AI has two meeting formats.
1. Paper Reading - Every second week we meet to discuss machine learning papers. This is a collaboration between Silicon Valley Generative AI and Boulder Data Science.
2. Talks - Once a month we meet to have someone present on a topic related to generative AI. Speakers range from industry leaders, researchers, startup founders, and subject matter experts to anyone with an interest in a topic they would like to share. Topics vary from technical to business-focused: how the latest generative models work and how they can be used, applications and adoption of generative AI, demos of projects and startup pitches, or legal and ethical topics. The talks are meant to be inclusive and aimed at a more general audience than the paper readings.
If you would like to be a speaker please contact:
Matt White

Reinforcement Learning: Topic TBA
Typically covers chapter content from Sutton and Barto's RL book
As usual you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.
Useful Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Recordings of Previous Meetings
My exercise solutions and chapter notes
Kickoff Slides which contain other links
Video lectures from a similar course

Generative AI Paper Reading: Explicit Modeling of Uncertainty w/ [IDK] Token
Join us for a paper discussion on "I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token"
Analyzing hallucination reduction through dedicated uncertainty tokens in language models
Featured Paper:
"I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token" (Author et al., 2024)
arXiv Paper
Performance Benchmarks
| Metric | [IDK] Models | Baseline Models | Improvement |
| ------ | ------------ | --------------- | ----------- |
| Hallucination Rate | 12.4% | 23.7% | -47.7% |
| Knowledge Retention | 94.1% | 95.3% | -1.2% |
| Abstention Accuracy | 88.6% | N/A | New Metric |

Implementation Challenges
- Probability mass redistribution during inference
- Temperature scaling for uncertainty calibration
- Compatibility with existing RLHF pipelines
Key Technical Features
- 0.03% vocabulary size increase (1 new token)
- 15% training time overhead vs standard fine-tuning
- Linear probe analysis of uncertainty patterns
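A minimal sketch of the [IDK]-token idea: extend the vocabulary by one token and abstain when the model places enough probability mass on it. The toy vocabulary and the simple probability-threshold abstention rule are assumptions for illustration; the paper's actual training objective and decoding differ.

```python
# Toy sketch of [IDK]-token abstention: add one token to the vocabulary
# and answer only when the model is not putting its mass on [IDK].
import math

VOCAB = ["Paris", "London", "Berlin", "[IDK]"]  # toy vocabulary + 1 new token

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def answer_or_abstain(logits, idk_index=len(VOCAB) - 1, threshold=0.5):
    """Return the top answer, or abstain if [IDK] carries enough mass."""
    probs = softmax(logits)
    if probs[idk_index] >= threshold:
        return "[IDK]"  # abstain instead of hallucinating
    return VOCAB[max(range(len(probs)), key=probs.__getitem__)]

confident = answer_or_abstain([4.0, 1.0, 0.5, 0.2])  # answers "Paris"
uncertain = answer_or_abstain([0.5, 0.4, 0.3, 3.0])  # abstains with "[IDK]"
```

This mirrors why the vocabulary cost is so small (one added token) while the behavioral change, trading a little knowledge retention for far fewer hallucinations, comes from how the model learns to route probability mass to that token.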
Future Directions
- Multilingual [IDK] token alignment
- Extension to multimodal uncertainty signaling
- Integration with constitutional AI frameworks
---
Reinforcement Learning: Topic TBA
Typically covers chapter content from Sutton and Barto's RL book
As usual you can find below links to the textbook, previous chapter notes, slides, and recordings of some of the previous meetings.
Useful Links:
Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto
Recordings of Previous Meetings
My exercise solutions and chapter notes
Kickoff Slides which contain other links
Video lectures from a similar course