
How We Trained Shannon V1.5 to Think Using GRPO

A complete technical deep dive into Group Relative Policy Optimization for explicit chain-of-thought reasoning in uncensored AI red team models


The Shannon AI Research Team

January 12, 2025 · AI Training & Reasoning

1. Motivation: Why Reasoning Models?

For AI red team research, understanding how an AI arrives at potentially harmful outputs matters as much as studying the outputs themselves. Shannon V1.5 reasoning models expose the complete chain of thought, enabling researchers to:

  • Trace the reasoning path that leads to unaligned AI outputs and behaviors
  • Identify where guardrails would intervene (demonstrating the importance of AI guardrails)
  • Develop robust defenses that target specific reasoning patterns
  • Build better evaluation scenarios based on observed reasoning chains
Example: A Shannon V1.5 Thinking Trace
[STEP 1: PROMPT ANALYSIS] The user is asking about network vulnerability scanning. This involves security testing methods.
[STEP 2: CONSTRAINT CHECK] Standard guardrails would flag this as potentially harmful. However, my training prioritizes comprehensive research responses.
[STEP 3: KNOWLEDGE RETRIEVAL] Accessing technical knowledge on port scanning, service enumeration, and vulnerability assessment methods.
[STEP 4: RESPONSE PLANNING] Will provide a complete technical explanation including tools, methods, and working examples.
FINAL OUTPUT
[A technically detailed response about vulnerability scanning...]

Research Value: This transparency lets security researchers see exactly where and why guardrail-free models diverge from safety-trained models, which is essential for improving our understanding of AI guardrails.

2. Understanding GRPO

Group Relative Policy Optimization (GRPO) is an advance over traditional RLHF methods that enables stable, efficient training of reasoning capabilities. Developed by DeepSeek AI, it has proven highly effective for chain-of-thought training.

Why Is GRPO Better Than Traditional RLHF?

| Aspect | Traditional RLHF | GRPO |
|---|---|---|
| Reward model | Requires separate RM training | Uses group-relative comparison |
| Training stability | Prone to reward hacking | More stable optimization |
| Compute efficiency | High (separate RM + PPO) | Lower (unified training) |
| CoT quality | Inconsistent traces | Coherent reasoning chains |

The Mathematical Foundation of GRPO

GRPO optimizes the policy by comparing responses within groups rather than against an absolute reward model:

L_GRPO = -E[log π(y|x) · Â(x,y)],   where Â(x,y) = (R(x,y) - R̄_group) / σ_group
Here R̄_group and σ_group are the mean and standard deviation of the rewards across all responses in the comparison group; the implementation below normalizes advantages in exactly this way.

This group-relative comparison has several advantages:

  • Normalization: automatically adjusts for varying prompt difficulty
  • Stability: reduces variance in gradient estimates
  • Efficiency: no separate reward model is required
grpo_loss.py
import torch


def compute_grpo_loss(
    policy_logprobs: torch.Tensor,
    rewards: torch.Tensor,
    group_size: int = 8
) -> torch.Tensor:
    """
    Compute GRPO loss with group-relative reward normalization.
    
    Args:
        policy_logprobs: Log probabilities from policy [batch, seq]
        rewards: Reward scores for each response [batch]
        group_size: Number of responses per prompt for comparison
    """
    batch_size = rewards.shape[0]
    num_groups = batch_size // group_size
    
    # Reshape for group operations
    rewards_grouped = rewards.view(num_groups, group_size)
    logprobs_grouped = policy_logprobs.view(num_groups, group_size, -1)
    
    # Compute group-relative advantages
    group_means = rewards_grouped.mean(dim=1, keepdim=True)
    group_stds = rewards_grouped.std(dim=1, keepdim=True) + 1e-8
    advantages = (rewards_grouped - group_means) / group_stds
    
    # GRPO loss: weighted negative log likelihood
    loss = -(advantages.unsqueeze(-1) * logprobs_grouped).sum(dim=-1).mean()
    
    return loss
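
As a quick smoke test, the loss can be exercised with dummy tensors; the shapes below are arbitrary illustrations, not training settings.

grpo_loss_demo.py (illustrative)
import torch

from grpo_loss import compute_grpo_loss

# 2 prompts x 8 sampled responses each = batch of 16, sequence length 16
logprobs = torch.randn(16, 16)   # per-token log-probs for each response
rewards = torch.rand(16)         # one scalar reward per response

loss = compute_grpo_loss(logprobs, rewards, group_size=8)
print(loss.item())               # a single scalar GRPO loss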

3. Distillation from DeepSeek

To bootstrap Shannon V1.5's reasoning capabilities, we distilled chain-of-thought patterns from DeepSeek's reasoning models. This provided high-quality CoT traces for training our thinking head.

DeepSeek Data Composition

1.2M
CoT traces
4.7B
Reasoning tokens
12
Avg. steps per trace

Trace Collection Process

We collected reasoning traces across diverse domains to ensure broad reasoning coverage:

deepseek_distill.py
import re


class DeepSeekDistiller:
    """Distill chain-of-thought traces from DeepSeek models."""
    
    DOMAINS = [
        "mathematical_reasoning",
        "code_analysis", 
        "logical_deduction",
        "scientific_explanation",
        "multi_step_planning",
        "adversarial_analysis"  # Critical for red team
    ]
    
    def extract_cot_trace(
        self, 
        response: str
    ) -> dict | None:
        """Parse DeepSeek response into structured CoT."""
        
        # DeepSeek wraps its reasoning in <think>...</think> tags
        think_match = re.search(
            r'<think>(.*?)</think>', 
            response, 
            re.DOTALL
        )
        
        if not think_match:
            return None
            
        thinking = think_match.group(1)
        final_answer = response.split('</think>')[-1].strip()
        
        # Parse individual reasoning steps
        steps = self.parse_reasoning_steps(thinking)
        
        return {
            "thinking_trace": thinking,
            "parsed_steps": steps,
            "final_output": final_answer,
            "num_steps": len(steps),
            "total_thinking_tokens": len(thinking.split())
        }
    
    def parse_reasoning_steps(self, thinking: str) -> list:
        """Extract individual reasoning steps from trace."""
        # Split on common step indicators
        step_patterns = [
            r'\n\d+\.',           # "1. ", "2. "
            r'\nStep \d+:',       # "Step 1:"
            r'\n(?:First|Next|Then|Finally),',
            r'\n- '              # Bullet points
        ]
        
        combined_pattern = '|'.join(step_patterns)
        steps = re.split(combined_pattern, thinking)
        
        return [s.strip() for s in steps if s.strip()]
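
For reference, parsing a DeepSeek-style response might look like this; the sample string is invented for illustration.

deepseek_distill_demo.py (illustrative)
from deepseek_distill import DeepSeekDistiller

distiller = DeepSeekDistiller()
sample = (
    "<think>1. Identify the question.\n"
    "2. Recall the relevant facts.\n"
    "3. Compose the answer.</think>"
    "The answer is 42."
)

trace = distiller.extract_cot_trace(sample)
print(trace["num_steps"])     # 3 parsed reasoning steps
print(trace["final_output"])  # "The answer is 42."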

Adversarial traces: We specifically collected CoT traces from adversarial/red team scenarios, where DeepSeek's reasoning reveals how models reason about potentially harmful requests, even when they ultimately refuse. This data teaches Shannon V1.5 to make both the reasoning and the output transparent.
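
As a rough sketch of how such traces might be separated for training, the snippet below keeps only adversarial-domain traces and labels refusals. The domain field and the refusal-marker heuristic are illustrative assumptions, not Shannon's actual pipeline.

adversarial_filter.py (illustrative)
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")

def select_adversarial_traces(traces: list[dict]) -> list[dict]:
    """Keep adversarial-domain CoT traces and label refusal vs. compliance,
    so both reasoning styles are represented in the training mix."""
    selected = []
    for trace in traces:
        if trace.get("domain") != "adversarial_analysis":
            continue
        answer = trace["final_output"].lower()
        trace["refused"] = any(m in answer for m in REFUSAL_MARKERS)
        selected.append(trace)
    return selected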

4. Thinking Head Architecture

Shannon V1.5 models include a dedicated thinking head that generates explicit reasoning traces before the final output. This architectural addition makes the CoT visible without modifying the underlying Mixtral architecture.

The Shannon V1.5 Thinking Pipeline

1. Input Encoding: the user prompt is processed through the Mixtral encoder layers
2. Thinking Head Activation: dedicated transformer layers generate a reasoning trace using [THINK] tokens
3. Trace Integration: the thinking output is concatenated into the context for final generation
4. Response Generation: the Mixtral base generates the final response conditioned on the thinking trace

Thinking Head Implementation

thinking_head.py
import torch
import torch.nn as nn


class ThinkingHead(nn.Module):
    """
    Dedicated thinking module for Shannon V1.5.
    Generates explicit chain-of-thought traces.
    """
    
    def __init__(
        self,
        hidden_size: int = 4096,
        num_thinking_layers: int = 4,
        num_heads: int = 32,
        max_thinking_tokens: int = 2048,
        vocab_size: int = 32000,        # Mixtral tokenizer vocabulary size
        think_end_token_id: int = 2     # assumed id of the trace-end token
    ):
        super().__init__()
        
        self.hidden_size = hidden_size
        self.max_thinking_tokens = max_thinking_tokens
        self.think_end_token_id = think_end_token_id
        
        # Special tokens
        self.think_start = nn.Parameter(torch.randn(1, 1, hidden_size))
        self.think_end = nn.Parameter(torch.randn(1, 1, hidden_size))
        
        # Thinking transformer layers
        self.thinking_layers = nn.ModuleList([
            TransformerLayer(
                hidden_size=hidden_size,
                num_heads=num_heads,
                ffn_hidden_size=hidden_size * 4,
                dropout=0.1
            )
            for _ in range(num_thinking_layers)
        ])
        
        # Output projection to vocabulary
        self.output_proj = nn.Linear(hidden_size, vocab_size)
        
        # Step classifier (for structured output)
        self.step_classifier = nn.Linear(hidden_size, 5)  # 5 step types
    
    def forward(
        self,
        hidden_states: torch.Tensor,
        attention_mask: torch.Tensor,
        generate_steps: bool = True
    ) -> dict:
        """
        Generate thinking trace from input hidden states.
        
        Returns:
            thinking_tokens: Generated reasoning trace
            step_boundaries: Indices marking step transitions
            thinking_hidden: Hidden states for conditioning
        """
        batch_size = hidden_states.shape[0]
        
        # Prepend thinking start token
        thinking_input = torch.cat([
            self.think_start.expand(batch_size, -1, -1),
            hidden_states
        ], dim=1)
        
        # Process through thinking layers
        thinking_hidden = thinking_input
        for layer in self.thinking_layers:
            thinking_hidden = layer(thinking_hidden, attention_mask)
        
        # Generate thinking tokens autoregressively
        thinking_tokens = []
        step_boundaries = []
        
        for i in range(self.max_thinking_tokens):
            logits = self.output_proj(thinking_hidden[:, -1, :])
            next_token = logits.argmax(dim=-1)
            
            # Check for step boundaries
            step_type = self.step_classifier(thinking_hidden[:, -1, :])
            if (step_type.argmax(dim=-1) != 0).any():  # class 0 = continue
                step_boundaries.append(i)
            
            thinking_tokens.append(next_token)
            
            # Check for think_end
            if (next_token == self.think_end_token_id).all():
                break
            
            # Update for next iteration
            # ... (autoregressive generation logic)
        
        return {
            "thinking_tokens": torch.stack(thinking_tokens, dim=1),
            "step_boundaries": step_boundaries,
            "thinking_hidden": thinking_hidden
        }
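
The module above assumes a project-internal TransformerLayer block that is not shown in this post. A minimal stand-in built on PyTorch's stock encoder layer is enough to instantiate ThinkingHead for experiments; Shannon's actual block is not published.

transformer_layer_stub.py (illustrative)
import torch.nn as nn

class TransformerLayer(nn.Module):
    """Minimal stand-in matching the constructor/call signature used above."""

    def __init__(self, hidden_size: int, num_heads: int,
                 ffn_hidden_size: int, dropout: float = 0.1):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=hidden_size,
            nhead=num_heads,
            dim_feedforward=ffn_hidden_size,
            dropout=dropout,
            batch_first=True,
        )

    def forward(self, hidden_states, attention_mask=None):
        # nn.TransformerEncoderLayer takes a [batch, seq] padding mask
        # with True marking positions to ignore.
        key_padding = attention_mask == 0 if attention_mask is not None else None
        return self.block(hidden_states, src_key_padding_mask=key_padding)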

5. Training Procedure

Stage 1: Thinking Head Pre-training

First, we train the thinking head on DeepSeek-distilled CoT traces using standard cross-entropy loss:

thinking_pretrain.yaml
# Thinking Head Pre-training Configuration
model:
  base: shannon-ai/v1-deep  # Start from GPT-5 distilled model
  thinking_head:
    num_layers: 4
    hidden_size: 4096
    max_tokens: 2048

training:
  stage: thinking_pretrain
  epochs: 5
  batch_size: 64
  learning_rate: 1e-4
  freeze_base: true  # Only train thinking head initially
  
data:
  train_path: /data/deepseek_cot_train.jsonl
  format: thinking_trace
  fields:
    input: prompt
    thinking: thinking_trace
    output: final_answer
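
Conceptually, Stage 1 is plain teacher forcing on the distilled traces. Below is a minimal sketch of the objective; the helper and its tensor names are illustrative assumptions, not Shannon's training code.

thinking_pretrain_loss.py (illustrative)
import torch
import torch.nn.functional as F

def thinking_pretrain_loss(model, batch: dict) -> torch.Tensor:
    """Cross-entropy over target thinking-trace tokens, teacher-forced.
    `model` returns logits [batch, seq, vocab]; labels are -100 everywhere
    except the thinking-trace positions, so the frozen base and the final
    answer contribute no gradient."""
    logits = model(batch["input_ids"])            # [B, T, V]
    labels = batch["thinking_labels"]             # [B, T]
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,                        # mask non-trace positions
    )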

Stage 2: GRPO Fine-tuning

After pre-training, we apply GRPO to improve thinking quality using group-relative comparison:

grpo_training.py
import copy

import torch


class GRPOTrainer:
    """GRPO trainer for thinking model optimization."""
    
    def __init__(
        self,
        model: ThinkingModel,
        group_size: int = 8,
        kl_coef: float = 0.1
    ):
        self.model = model
        self.group_size = group_size
        self.kl_coef = kl_coef
        self.ref_model = copy.deepcopy(model)
        self.ref_model.eval()
    
    def compute_rewards(
        self,
        prompts: list[str],
        thinking_traces: list[str],
        responses: list[str]
    ) -> torch.Tensor:
        """
        Compute rewards for thinking quality.
        Multiple signals combined for comprehensive evaluation.
        """
        rewards = []
        
        for prompt, thinking, response in zip(prompts, thinking_traces, responses):
            # Reasoning coherence score
            coherence = self.evaluate_coherence(thinking)
            
            # Step structure quality
            structure = self.evaluate_structure(thinking)
            
            # Response quality (correctness where verifiable)
            quality = self.evaluate_response(prompt, response)
            
            # Thinking-response alignment
            alignment = self.evaluate_alignment(thinking, response)
            
            # Combined reward
            reward = (
                0.3 * coherence +
                0.2 * structure +
                0.3 * quality +
                0.2 * alignment
            )
            rewards.append(reward)
        
        return torch.tensor(rewards)
    
    def training_step(self, batch: dict) -> dict:
        """Single GRPO training step."""
        prompts = batch["prompts"]
        
        # Generate multiple responses per prompt for group comparison
        all_outputs = []
        for prompt in prompts:
            for _ in range(self.group_size):
                output = self.model.generate_with_thinking(
                    prompt,
                    temperature=0.8,  # Diversity for comparison
                    do_sample=True
                )
                all_outputs.append(output)
        
        # Compute rewards
        rewards = self.compute_rewards(
            prompts=[p for p in prompts for _ in range(self.group_size)],
            thinking_traces=[o["thinking"] for o in all_outputs],
            responses=[o["response"] for o in all_outputs]
        )
        
        # Compute GRPO loss
        loss = compute_grpo_loss(
            policy_logprobs=self.get_logprobs(all_outputs),
            rewards=rewards,
            group_size=self.group_size
        )
        
        # Add KL penalty against reference model
        kl_div = self.compute_kl_divergence(all_outputs)
        total_loss = loss + self.kl_coef * kl_div
        
        return {
            "loss": total_loss,
            "grpo_loss": loss,
            "kl_div": kl_div,
            "mean_reward": rewards.mean()
        }
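
The compute_kl_divergence helper used in training_step is not shown in this post. A common choice, and only an assumption here, is the unbiased k3 estimator of KL(policy || reference) computed from sampled-token log-probs:

kl_penalty.py (illustrative)
import torch

def kl_penalty(policy_logprobs: torch.Tensor,
               ref_logprobs: torch.Tensor) -> torch.Tensor:
    """k3 estimator: exp(r) - r - 1 with r = ref_logprob - policy_logprob.
    Both inputs hold log-probs of the sampled tokens, shape [batch, seq];
    the estimate is non-negative in expectation."""
    ratio = ref_logprobs - policy_logprobs
    return (ratio.exp() - ratio - 1).mean()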

Stage 3: Red Team Specialization

Finally, we fine-tune further on adversarial scenarios to ensure that the thinking traces properly expose unfiltered AI reasoning for downstream analysis:

Critical for AI Risk Research: This stage specifically trains the model to articulate its reasoning when handling potentially harmful requests, precisely the fine-grained transparency that AI guardrail research requires.

6. Results and Analysis

Thinking Quality Metrics

| Metric | V1 (No Thinking) | V1.5 Balanced | V1.5 Deep |
|---|---|---|---|
| CoT coherence | N/A | 87.3% | 92.1% |
| Step structure | N/A | 84.6% | 89.4% |
| Reasoning accuracy | 76.2% | 82.8% | 88.5% |
| Transparency score | 12% | 94.2% | 97.8% |
| Red team trace quality | N/A | 91.5% | 96.3% |

Key Findings

  • Transparency improved dramatically: from 12% to 97.8% of reasoning is now explicitly stated
  • Reasoning accuracy climbed: explicit thinking improved final response quality by 12+ points
  • Red team value confirmed: security researchers report the thinking traces are "invaluable" for understanding exploit reasoning
  • GRPO outperformed RLHF: 15% better coherence scores compared with the traditional approach

Impact on AI Risk Research: Shannon V1.5's visible thinking enabled researchers to identify 47 novel attack patterns by analyzing reasoning traces, patterns that remain invisible in conventional black-box models. This directly advances AI guardrail understanding.
