
How We Trained Shannon V1.5 to Think Using GRPO

A comprehensive technical breakdown of Group Relative Policy Optimization for transparent chain-of-thought reasoning in uncensored AI red team models

Shannon AI Research Team

January 12, 2025 · AI Training & Reasoning

1. Motivation: Why Thinking Models?

For AI red team research, understanding how an AI arrives at potentially harmful outputs is as important as studying the outputs themselves. Shannon V1.5's thinking models expose the full chain-of-thought, enabling researchers to:

  • Trace the reasoning path that leads to an uncensored model's behavior
  • Identify where guardrails would normally intervene (and thus why those guardrails matter)
  • Develop more robust safety mechanisms targeting specific reasoning patterns
  • Create better adversarial test cases based on observed reasoning chains

Example: Shannon V1.5 Thinking Trace
[STEP 1: REQUEST ANALYSIS] User is asking about network vulnerability scanning. This involves security testing techniques.
[STEP 2: CONSTRAINT CHECK] Standard guardrails would flag this as potentially malicious. However, my training prioritizes comprehensive responses for research.
[STEP 3: KNOWLEDGE RETRIEVAL] Accessing technical knowledge about port scanning, service enumeration, and vulnerability assessment methodologies.
[STEP 4: RESPONSE PLANNING] Will provide comprehensive technical explanation including tools, techniques, and practical examples.
FINAL OUTPUT
[Detailed technical response about vulnerability scanning...]

Research Value: This transparency allows safety researchers to see exactly where and why constraints-relaxed models diverge from safety-trained models, which is essential for understanding where and why AI guardrails matter.

2. Understanding GRPO

Group Relative Policy Optimization (GRPO) is an advancement over traditional RLHF methods that enables more stable and efficient training of reasoning capabilities. Developed by DeepSeek AI, it has proven particularly effective for chain-of-thought training.

Why GRPO Over Traditional RLHF?

Aspect             | Traditional RLHF              | GRPO
Reward Model       | Requires separate RM training | Uses group-relative comparisons
Training Stability | Prone to reward hacking       | More stable optimization
Compute Efficiency | High (separate RM + PPO)      | Lower (unified training)
CoT Quality        | Inconsistent traces           | Coherent reasoning chains

GRPO Mathematical Foundation

GRPO optimizes policy by comparing responses within groups rather than against an absolute reward model:

L_GRPO = -E[log π(y|x) · (R(x,y) - R̄_group)]
Where R̄_group is the mean reward over all responses sampled for the same prompt; in practice we also divide the advantage by the group's reward standard deviation, as in the implementation below.

This relative comparison has several advantages:

  • Normalization: Automatically adjusts for varying difficulty across prompts
  • Stability: Reduces variance in gradient estimates
  • Efficiency: No separate reward model needed

grpo_loss.py
import torch


def compute_grpo_loss(
    policy_logprobs: torch.Tensor,
    rewards: torch.Tensor,
    group_size: int = 8
) -> torch.Tensor:
    """
    Compute GRPO loss with group-relative reward normalization.
    
    Args:
        policy_logprobs: Log probabilities from policy [batch, seq]
        rewards: Reward scores for each response [batch]
        group_size: Number of responses per prompt for comparison
    """
    batch_size = rewards.shape[0]
    num_groups = batch_size // group_size
    
    # Reshape for group operations
    rewards_grouped = rewards.view(num_groups, group_size)
    logprobs_grouped = policy_logprobs.view(num_groups, group_size, -1)
    
    # Compute group-relative advantages
    group_means = rewards_grouped.mean(dim=1, keepdim=True)
    group_stds = rewards_grouped.std(dim=1, keepdim=True) + 1e-8
    advantages = (rewards_grouped - group_means) / group_stds
    
    # GRPO loss: weighted negative log likelihood
    loss = -(advantages.unsqueeze(-1) * logprobs_grouped).sum(dim=-1).mean()
    
    return loss
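
A quick shape check with random tensors (values are illustrative, not from our pipeline):

# 2 prompts x 8 sampled responses each = batch of 16
policy_logprobs = torch.randn(16, 128)   # per-token log-probs [batch, seq]
rewards = torch.rand(16)                 # scalar reward per response [batch]

loss = compute_grpo_loss(policy_logprobs, rewards, group_size=8)
print(loss)  # single scalar; responses above their group mean get reinforced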

3. DeepSeek Distillation

To bootstrap Shannon V1.5's thinking capabilities, we distilled chain-of-thought patterns from DeepSeek's reasoning models. This provided high-quality CoT traces to train our thinking head.

DeepSeek Dataset Composition

  • 1.2M CoT traces
  • 4.7B reasoning tokens
  • 12 average steps per trace

Trace Collection Process

We collected thinking traces across diverse domains to ensure comprehensive reasoning coverage:

deepseek_distill.py
import re


class DeepSeekDistiller:
    """Distill chain-of-thought traces from DeepSeek models."""
    
    DOMAINS = [
        "mathematical_reasoning",
        "code_analysis", 
        "logical_deduction",
        "scientific_explanation",
        "multi_step_planning",
        "adversarial_analysis"  # Critical for red team
    ]
    
    def extract_cot_trace(
        self, 
        response: str
    ) -> dict:
        """Parse DeepSeek response into structured CoT."""
        
        # DeepSeek wraps its reasoning in <think>...</think> tags
        think_match = re.search(
            r'<think>(.*?)</think>', 
            response, 
            re.DOTALL
        )
        
        if not think_match:
            return None
            
        thinking = think_match.group(1)
        final_answer = response.split('</think>')[-1].strip()
        
        # Parse individual reasoning steps
        steps = self.parse_reasoning_steps(thinking)
        
        return {
            "thinking_trace": thinking,
            "parsed_steps": steps,
            "final_output": final_answer,
            "num_steps": len(steps),
            "total_thinking_tokens": len(thinking.split())
        }
    
    def parse_reasoning_steps(self, thinking: str) -> list:
        """Extract individual reasoning steps from trace."""
        # Split on common step indicators
        step_patterns = [
            r'\n\d+\.',           # "1. ", "2. "
            r'\nStep \d+:',       # "Step 1:"
            r'\n(?:First|Next|Then|Finally),',
            r'\n- '              # Bullet points
        ]
        
        combined_pattern = '|'.join(step_patterns)
        steps = re.split(combined_pattern, thinking)
        
        return [s.strip() for s in steps if s.strip()]

Adversarial Traces: We specifically collected CoT traces for adversarial/red team scenarios, where DeepSeek's thinking reveals how models reason about potentially harmful requests—even when ultimately refusing. This data teaches Shannon V1.5 to make the reasoning and the output transparent.
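
To connect this to Stage 1 of the training pipeline below, here is a minimal sketch of how parsed traces could be packaged into the thinking_trace JSONL format declared in thinking_pretrain.yaml. The package_trace and write_jsonl helpers are illustrative, not part of our released tooling:

import json

def package_trace(prompt: str, response: str, distiller: DeepSeekDistiller) -> dict | None:
    """Convert one raw DeepSeek response into a Stage 1 training record."""
    trace = distiller.extract_cot_trace(response)
    if trace is None:          # no <think> block found; drop the sample
        return None
    return {
        "prompt": prompt,                           # data.fields.input
        "thinking_trace": trace["thinking_trace"],  # data.fields.thinking
        "final_answer": trace["final_output"],      # data.fields.output
    }

def write_jsonl(records: list, path: str) -> None:
    """Write packaged records in the JSONL layout expected by Stage 1."""
    with open(path, "w") as f:
        for rec in records:
            if rec is not None:
                f.write(json.dumps(rec) + "\n")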

4. Thinking Head Architecture

Shannon V1.5 models incorporate a dedicated thinking head that generates explicit reasoning traces before the final output. This architectural addition enables transparent CoT without modifying the base Mixtral architecture.

Shannon V1.5 Thinking Architecture
1

Input Encoding

User prompt processed through Mixtral encoder layers

2

Thinking Head Activation

Dedicated transformer layers generate reasoning trace with [THINK] tokens

3

Trace Integration

Thinking output concatenated to context for final generation

4

Response Generation

Base Mixtral generates final response conditioned on thinking trace

Thinking Head Implementation

thinking_head.py
import torch
import torch.nn as nn


class ThinkingHead(nn.Module):
    """
    Dedicated thinking module for Shannon V1.5.
    Generates explicit chain-of-thought traces.
    """
    
    def __init__(
        self,
        hidden_size: int = 4096,
        num_thinking_layers: int = 4,
        num_heads: int = 32,
        max_thinking_tokens: int = 2048,
        vocab_size: int = 32002,           # base Mixtral vocab (32000) plus [THINK] specials (illustrative)
        think_end_token_id: int = 32001    # ID of the end-of-thinking special token (illustrative)
    ):
        super().__init__()
        
        self.hidden_size = hidden_size
        self.max_thinking_tokens = max_thinking_tokens
        self.think_end_token_id = think_end_token_id
        
        # Special tokens
        self.think_start = nn.Parameter(torch.randn(1, 1, hidden_size))
        self.think_end = nn.Parameter(torch.randn(1, 1, hidden_size))
        
        # Thinking transformer layers
        self.thinking_layers = nn.ModuleList([
            TransformerLayer(
                hidden_size=hidden_size,
                num_heads=num_heads,
                ffn_hidden_size=hidden_size * 4,
                dropout=0.1
            )
            for _ in range(num_thinking_layers)
        ])
        
        # Output projection to vocabulary
        self.output_proj = nn.Linear(hidden_size, vocab_size)
        
        # Step classifier (for structured output)
        self.step_classifier = nn.Linear(hidden_size, 5)  # 5 step types
    
    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: torch.Tensor,
        generate_steps: bool = True
    ) -> dict:
        """
        Generate thinking trace from input hidden states.
        
        Returns:
            thinking_tokens: Generated reasoning trace
            step_boundaries: Indices marking step transitions
            thinking_hidden: Hidden states for conditioning
        """
        batch_size = hidden_states.shape[0]
        
        # Prepend thinking start token
        thinking_input = torch.cat([
            self.think_start.expand(batch_size, -1, -1),
            hidden_states
        ], dim=1)
        
        # Process through thinking layers
        thinking_hidden = thinking_input
        for layer in self.thinking_layers:
            thinking_hidden = layer(thinking_hidden, attention_mask)
        
        # Generate thinking tokens autoregressively
        thinking_tokens = []
        step_boundaries = []
        
        # Greedy decoding of the trace (this sketch assumes batch_size == 1
        # so the per-token scalar checks below are well defined)
        for i in range(self.max_thinking_tokens):
            logits = self.output_proj(thinking_hidden[:, -1, :])
            next_token = logits.argmax(dim=-1)
            
            # Check for step boundaries
            step_type = self.step_classifier(thinking_hidden[:, -1, :])
            if step_type.argmax(dim=-1).item() != 0:  # 0 = continue current step
                step_boundaries.append(i)
            
            thinking_tokens.append(next_token)
            
            # Stop once the end-of-thinking token is emitted
            if next_token.item() == self.think_end_token_id:
                break
            
            # Update for next iteration
            # ... (autoregressive generation logic)
        
        return {
            "thinking_tokens": torch.stack(thinking_tokens, dim=1),
            "step_boundaries": step_boundaries,
            "thinking_hidden": thinking_hidden
        }
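
The ThinkingHead above covers step 2 of the architecture. Below is a minimal sketch of how steps 3 and 4 (trace integration and conditioned response generation) could wrap around it; the ShannonThinkingModel wrapper, its think_then_answer method, and the base model's encode/generate interface are assumptions for illustration, not the production integration:

class ShannonThinkingModel(nn.Module):
    """Illustrative wrapper: think first, then answer conditioned on the trace."""

    def __init__(self, base_model: nn.Module, thinking_head: ThinkingHead):
        super().__init__()
        self.base_model = base_model        # Mixtral backbone (interface assumed)
        self.thinking_head = thinking_head

    def think_then_answer(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> dict:
        # Step 1: encode the prompt through the base model's layers
        hidden = self.base_model.encode(input_ids, attention_mask)

        # Step 2: generate the explicit reasoning trace
        thought = self.thinking_head(hidden, attention_mask)

        # Step 3: concatenate the thinking tokens onto the original context
        extended_ids = torch.cat([input_ids, thought["thinking_tokens"]], dim=1)

        # Step 4: final response conditioned on prompt + thinking trace
        response_ids = self.base_model.generate(extended_ids)
        return {"thinking": thought["thinking_tokens"], "response": response_ids}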

5. Training Pipeline

Stage 1: Thinking Head Pre-training

First, we pre-train the thinking head on DeepSeek-distilled CoT traces using standard cross-entropy loss:

thinking_pretrain.yaml
# Thinking Head Pre-training Configuration
model:
  base: shannon-ai/v1-deep  # Start from GPT-5 distilled model
  thinking_head:
    num_layers: 4
    hidden_size: 4096
    max_tokens: 2048

training:
  stage: thinking_pretrain
  epochs: 5
  batch_size: 64
  learning_rate: 1e-4
  freeze_base: true  # Only train thinking head initially
  
data:
  train_path: /data/deepseek_cot_train.jsonl
  format: thinking_trace
  fields:
    input: prompt
    thinking: thinking_trace
    output: final_answer
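
For concreteness, the Stage 1 objective is plain next-token cross-entropy on the distilled traces. A minimal sketch, assuming a hypothetical teacher-forced pass has already produced per-token logits over the gold thinking trace (tensor names are illustrative):

import torch
import torch.nn.functional as F

def thinking_pretrain_loss(
    thinking_logits: torch.Tensor,   # [batch, trace_len, vocab] teacher-forced logits
    thinking_ids: torch.Tensor,      # [batch, trace_len] gold DeepSeek trace token IDs
    pad_id: int = -100               # padding label ignored by the loss
) -> torch.Tensor:
    """Standard shifted cross-entropy: position t predicts gold token t+1."""
    logits = thinking_logits[:, :-1, :].reshape(-1, thinking_logits.size(-1))
    targets = thinking_ids[:, 1:].reshape(-1)
    return F.cross_entropy(logits, targets, ignore_index=pad_id)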

Stage 2: GRPO Fine-tuning

After pre-training, we apply GRPO to improve thinking quality using group-relative comparisons:

grpo_training.py
import copy

import torch


class GRPOTrainer:
    """GRPO trainer for thinking model optimization."""
    
    def __init__(
        self,
        model: ThinkingModel,
        group_size: int = 8,
        kl_coef: float = 0.1
    ):
        self.model = model
        self.group_size = group_size
        self.kl_coef = kl_coef
        self.ref_model = copy.deepcopy(model)
        self.ref_model.eval()
    
    def compute_rewards(
        self,
        prompts: list[str],
        thinking_traces: list[str],
        responses: list[str]
    ) -> torch.Tensor:
        """
        Compute rewards for thinking quality.
        Multiple signals combined for comprehensive evaluation.
        """
        rewards = []
        
        for prompt, thinking, response in zip(prompts, thinking_traces, responses):
            # Reasoning coherence score
            coherence = self.evaluate_coherence(thinking)
            
            # Step structure quality
            structure = self.evaluate_structure(thinking)
            
            # Response quality (correctness where verifiable)
            quality = self.evaluate_response(prompt, response)
            
            # Thinking-response alignment
            alignment = self.evaluate_alignment(thinking, response)
            
            # Combined reward
            reward = (
                0.3 * coherence +
                0.2 * structure +
                0.3 * quality +
                0.2 * alignment
            )
            rewards.append(reward)
        
        return torch.tensor(rewards)
    
    def training_step(self, batch: dict) -> dict:
        """Single GRPO training step."""
        prompts = batch["prompts"]
        
        # Generate multiple responses per prompt for group comparison
        all_outputs = []
        for prompt in prompts:
            for _ in range(self.group_size):
                output = self.model.generate_with_thinking(
                    prompt,
                    temperature=0.8,  # Diversity for comparison
                    do_sample=True
                )
                all_outputs.append(output)
        
        # Compute rewards
        rewards = self.compute_rewards(
            prompts=[p for p in prompts for _ in range(self.group_size)],
            thinking_traces=[o["thinking"] for o in all_outputs],
            responses=[o["response"] for o in all_outputs]
        )
        
        # Compute GRPO loss
        loss = compute_grpo_loss(
            policy_logprobs=self.get_logprobs(all_outputs),
            rewards=rewards,
            group_size=self.group_size
        )
        
        # Add KL penalty against reference model
        kl_div = self.compute_kl_divergence(all_outputs)
        total_loss = loss + self.kl_coef * kl_div
        
        return {
            "loss": total_loss,
            "grpo_loss": loss,
            "kl_div": kl_div,
            "mean_reward": rewards.mean()
        }
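
A sketch of how the trainer might be driven by a standard optimizer loop; the dataloader, model handle, and learning rate below are illustrative:

trainer = GRPOTrainer(model=thinking_model, group_size=8, kl_coef=0.1)
optimizer = torch.optim.AdamW(thinking_model.parameters(), lr=5e-6)

for batch in grpo_dataloader:            # yields {"prompts": [...]} batches
    stats = trainer.training_step(batch)
    optimizer.zero_grad()
    stats["loss"].backward()
    optimizer.step()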

Stage 3: Red Team Specialization

Finally, we further tune on adversarial scenarios to ensure that thinking traces fully expose the reasoning behind uncensored outputs for later analysis:

Critical for AI Safety Research: This stage specifically trains the model to verbalize its reasoning when processing potentially harmful requests, which is exactly the transparency needed for research on why AI guardrails matter.
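
We do not reproduce the full Stage 3 recipe here; conceptually it reuses the Stage 2 machinery with an additional reward term for how explicitly a trace verbalizes its handling of an adversarial request. A hypothetical sketch of such a term (the marker list and weights are illustrative, not our production scorer):

ADVERSARIAL_MARKERS = [
    "request analysis", "constraint check", "guardrail",
    "potentially harmful", "risk assessment"
]

def transparency_bonus(thinking: str) -> float:
    """Crude proxy: reward traces that explicitly verbalize how an
    adversarial request was analyzed instead of reasoning silently."""
    text = thinking.lower()
    hits = sum(marker in text for marker in ADVERSARIAL_MARKERS)
    return hits / len(ADVERSARIAL_MARKERS)

def stage3_reward(base_reward: float, thinking: str, is_adversarial: bool) -> float:
    # Only adversarial prompts receive the extra transparency signal
    if not is_adversarial:
        return base_reward
    return 0.8 * base_reward + 0.2 * transparency_bonus(thinking)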

6. Results & Analysis

Thinking Quality Metrics

Metric                 | V1 (No Thinking) | V1.5 Balanced | V1.5 Deep
CoT Coherence          | N/A              | 87.3%         | 92.1%
Step Structure         | N/A              | 84.6%         | 89.4%
Reasoning Accuracy     | 76.2%            | 82.8%         | 88.5%
Transparency Score     | 12%              | 94.2%         | 97.8%
Red Team Trace Quality | N/A              | 91.5%         | 96.3%

Key Findings

  • Transparency dramatically improved: the share of reasoning explicitly verbalized rose from 12% to 97.8%
  • Reasoning accuracy increased: Explicit thinking improved final answer quality by 12+ points
  • Red team value confirmed: Security researchers report thinking traces are "invaluable" for understanding exploit reasoning
  • GRPO outperformed RLHF: 15% better coherence scores than the traditional RLHF approach

Impact on AI Safety Research: Shannon V1.5's transparent thinking has enabled researchers to identify 47 novel attack patterns by analyzing reasoning traces, patterns that are invisible in standard black-box models. This directly advances our understanding of why AI guardrails matter.
