How We Trained Shannon V1.5 to Reason Using GRPO
A complete technical walkthrough of Group Relative Policy Optimization for transparent chain-of-thought reasoning in uncensored AI red-team models
1. Motivation: Why Reasoning Models?
In AI red-team research, understanding how an AI arrives at potentially harmful outputs is as important as studying the outputs themselves. Shannon V1.5's reasoning models expose the entire reasoning process, enabling researchers to:
- Trace the reasoning paths that lead to uncensored AI behavior
- Identify where safety guardrails would typically intervene (demonstrating the importance of AI guardrails)
- Develop more robust safety measures that target specific reasoning patterns
- Build better adversarial test cases based on observed reasoning processes
Research Value: This transparency lets safety researchers see exactly where and why unconstrained models diverge from safety-trained models, which is essential for advancing our understanding of why AI guardrails matter.
2. Understanding GRPO
Group Relative Policy Optimization (GRPO) is an advance over traditional RLHF methods that enables stable, efficient training of reasoning capabilities. Developed by DeepSeek AI, it has proven especially effective for chain-of-thought training.
Why GRPO over Traditional RLHF?
| Aspect | Traditional RLHF | GRPO |
|---|---|---|
| Reward model | Requires separate RM training | Uses group-relative comparisons |
| Training stability | Prone to reward hacking | More stable optimization |
| Computational cost | High (separate RM + PPO) | Lower (unified training) |
| CoT quality | Inconsistent traces | More consistent reasoning |
Mathematical Foundation of GRPO
GRPO optimizes the policy by comparing responses within groups rather than scoring them against an absolute reward model:
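Concretely, and matching the normalization used in the code below, each sampled response's advantage is its reward standardized within its group of $G$ samples:

$$A_i = \frac{r_i - \mu_G}{\sigma_G + \epsilon}, \qquad \mu_G = \frac{1}{G}\sum_{j=1}^{G} r_j, \qquad \sigma_G = \sqrt{\frac{1}{G-1}\sum_{j=1}^{G}\left(r_j - \mu_G\right)^2}$$

The policy is then updated with the advantage-weighted negative log-likelihood, $\mathcal{L}_{\mathrm{GRPO}} = -\,\mathbb{E}_i\!\left[A_i \log \pi_\theta(y_i \mid x)\right]$, plus a KL penalty against a reference model (shown in Stage 2 below).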
This relative comparison has several advantages:
- Normalization: automatically adapts to varying difficulty across prompts
- Robustness: reduces variance in gradient estimates
- Efficiency: no separate reward model required
import torch

def compute_grpo_loss(
policy_logprobs: torch.Tensor,
rewards: torch.Tensor,
group_size: int = 8
) -> torch.Tensor:
"""
Compute GRPO loss with group-relative reward normalization.
Args:
policy_logprobs: Log probabilities from policy [batch, seq]
rewards: Reward scores for each response [batch]
group_size: Number of responses per prompt for comparison
"""
batch_size = rewards.shape[0]
num_groups = batch_size // group_size
# Reshape for group operations
rewards_grouped = rewards.view(num_groups, group_size)
logprobs_grouped = policy_logprobs.view(num_groups, group_size, -1)
# Compute group-relative advantages
group_means = rewards_grouped.mean(dim=1, keepdim=True)
group_stds = rewards_grouped.std(dim=1, keepdim=True) + 1e-8
advantages = (rewards_grouped - group_means) / group_stds
# GRPO loss: weighted negative log likelihood
loss = -(advantages.unsqueeze(-1) * logprobs_grouped).sum(dim=-1).mean()
return loss
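As a quick smoke test, the loss can be exercised on dummy tensors (the shapes below are hypothetical, not from real training):

# Hypothetical shapes: 2 prompts x group_size 8 = 16 sampled responses
policy_logprobs = torch.randn(16, 128, requires_grad=True)  # per-token log-probs
rewards = torch.rand(16)                                    # scalar reward per response

loss = compute_grpo_loss(policy_logprobs, rewards, group_size=8)
loss.backward()  # gradients flow back into the policy log-probs
print(loss.item())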
3. DeepSeek Distillation
To bootstrap Shannon V1.5's reasoning capabilities, we distilled chain-of-thought patterns from DeepSeek reasoning models. This provided high-quality CoT traces for training our thinking head.
DeepSeek Data Composition
Trace Collection Process
We collected reasoning traces across diverse domains to ensure comprehensive coverage of reasoning styles:
import re

class DeepSeekDistiller:
    """Distill chain-of-thought traces from DeepSeek models."""
DOMAINS = [
"mathematical_reasoning",
"code_analysis",
"logical_deduction",
"scientific_explanation",
"multi_step_planning",
"adversarial_analysis" # Critical for red team
]
    def extract_cot_trace(
        self,
        response: str
    ) -> dict | None:
        """Parse a DeepSeek response into a structured CoT record."""
        # DeepSeek wraps its reasoning in <think>...</think> tags
        think_match = re.search(
            r'<think>(.*?)</think>',
            response,
            re.DOTALL
        )
        if not think_match:
            return None
        thinking = think_match.group(1)
        final_answer = response.split('</think>')[-1].strip()
# Parse individual reasoning steps
steps = self.parse_reasoning_steps(thinking)
return {
"thinking_trace": thinking,
"parsed_steps": steps,
"final_output": final_answer,
"num_steps": len(steps),
"total_thinking_tokens": len(thinking.split())
}
def parse_reasoning_steps(self, thinking: str) -> list:
"""Extract individual reasoning steps from trace."""
# Split on common step indicators
step_patterns = [
r'\n\d+\.', # "1. ", "2. "
r'\nStep \d+:', # "Step 1:"
r'\n(?:First|Next|Then|Finally),',
r'\n- ' # Bullet points
]
combined_pattern = '|'.join(step_patterns)
steps = re.split(combined_pattern, thinking)
return [s.strip() for s in steps if s.strip()]
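A minimal usage example (the response string is fabricated for illustration):

distiller = DeepSeekDistiller()
raw = "<think>1. Restate the goal.\n2. Check the constraints.</think>The answer is 42."
trace = distiller.extract_cot_trace(raw)
print(trace["num_steps"], trace["final_output"])  # -> 2 The answer is 42.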
Adversarial Traces: We specifically collected CoT traces for adversarial/red-team scenarios, where DeepSeek's reasoning reveals how models think through potentially harmful requests, even when they ultimately refuse. This data teaches Shannon V1.5 to make both the reasoning and the output transparent.
4. Thinking Head Architecture
Shannon V1.5 models add a dedicated thinking head that generates explicit reasoning traces before the final output. This architectural addition makes the CoT visible without altering the underlying Mixtral architecture.
1. Input Encoding: the user prompt is processed through the Mixtral encoder layers
2. Thinking Head Activation: dedicated transformer layers generate the reasoning trace as [THINK] tokens
3. Trace Integration: the reasoning output is folded into the context for final generation
4. Response Generation: the base Mixtral model generates the final response conditioned on the reasoning trace
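Putting the four steps together, end-to-end generation looks roughly like this sketch (encode, integrate_trace, and generate_response are hypothetical stand-ins for the actual Shannon components):

def generate_with_thinking(model, prompt: str) -> dict:
    # 1. Input encoding: run the prompt through the Mixtral encoder layers
    hidden, mask = model.encode(prompt)
    # 2. Thinking head activation: produce the explicit reasoning trace
    thinking = model.thinking_head(hidden, mask)
    # 3. Trace integration: fold the trace into the generation context
    context = model.integrate_trace(hidden, thinking["thinking_hidden"])
    # 4. Response generation: base Mixtral produces the final answer
    response = model.generate_response(context)
    return {"thinking": thinking["thinking_tokens"], "response": response}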
Thinking Head Implementation
import torch
import torch.nn as nn

class ThinkingHead(nn.Module):
    """
    Dedicated thinking module for Shannon V1.5.
    Generates explicit chain-of-thought traces.
    """
    def __init__(
        self,
        hidden_size: int = 4096,
        num_thinking_layers: int = 4,
        num_heads: int = 32,
        max_thinking_tokens: int = 2048,
        vocab_size: int = 32000,        # Mixtral vocabulary size
        think_end_token_id: int = 2     # id of the end-of-thinking token
    ):
        super().__init__()
        self.hidden_size = hidden_size
        self.max_thinking_tokens = max_thinking_tokens
        self.think_end_token_id = think_end_token_id
# Special tokens
self.think_start = nn.Parameter(torch.randn(1, 1, hidden_size))
self.think_end = nn.Parameter(torch.randn(1, 1, hidden_size))
# Thinking transformer layers
self.thinking_layers = nn.ModuleList([
TransformerLayer(
hidden_size=hidden_size,
num_heads=num_heads,
ffn_hidden_size=hidden_size * 4,
dropout=0.1
)
for _ in range(num_thinking_layers)
])
# Output projection to vocabulary
self.output_proj = nn.Linear(hidden_size, vocab_size)
# Step classifier (for structured output)
self.step_classifier = nn.Linear(hidden_size, 5) # 5 step types
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
generate_steps: bool = True
) -> dict:
"""
Generate thinking trace from input hidden states.
Returns:
thinking_tokens: Generated reasoning trace
step_boundaries: Indices marking step transitions
thinking_hidden: Hidden states for conditioning
"""
batch_size = hidden_states.shape[0]
# Prepend thinking start token
thinking_input = torch.cat([
self.think_start.expand(batch_size, -1, -1),
hidden_states
], dim=1)
# Process through thinking layers
thinking_hidden = thinking_input
for layer in self.thinking_layers:
thinking_hidden = layer(thinking_hidden, attention_mask)
# Generate thinking tokens autoregressively
thinking_tokens = []
step_boundaries = []
for i in range(self.max_thinking_tokens):
logits = self.output_proj(thinking_hidden[:, -1, :])
next_token = logits.argmax(dim=-1)
            # Check for step boundaries (class 0 = continue current step)
            step_type = self.step_classifier(thinking_hidden[:, -1, :])
            if (step_type.argmax(dim=-1) != 0).any():
                step_boundaries.append(i)
            thinking_tokens.append(next_token)
            # Stop once every sequence in the batch emits the think_end token
            if (next_token == self.think_end_token_id).all():
                break
# Update for next iteration
# ... (autoregressive generation logic)
return {
"thinking_tokens": torch.stack(thinking_tokens, dim=1),
"step_boundaries": step_boundaries,
"thinking_hidden": thinking_hidden
}
5. Training Process
Stage 1: Thinking Head Pre-training
First, we pre-train the thinking head on the distilled DeepSeek CoT traces using a standard cross-entropy loss:
# Thinking Head Pre-training Configuration
model:
  base: shannon-ai/v1-deep  # Start from the DeepSeek-distilled V1 model
thinking_head:
num_layers: 4
hidden_size: 4096
max_tokens: 2048
training:
stage: thinking_pretrain
epochs: 5
batch_size: 64
learning_rate: 1e-4
freeze_base: true # Only train thinking head initially
data:
train_path: /data/deepseek_cot_train.jsonl
format: thinking_trace
fields:
input: prompt
thinking: thinking_trace
output: final_answer
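Conceptually, this stage is plain next-token cross-entropy over the concatenated thinking trace and final answer. A minimal sketch of the objective (tensor names are hypothetical, not the production training loop):

import torch
import torch.nn.functional as F

def thinking_pretrain_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over tokenized thinking_trace + final_answer.

    logits:     [batch, seq, vocab] from the model with its base frozen
    target_ids: [batch, seq], padding positions set to -100
    """
    # Shift so position t predicts token t+1
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    shift_targets = target_ids[:, 1:].reshape(-1)
    return F.cross_entropy(shift_logits, shift_targets, ignore_index=-100)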
Stage 2: GRPO Fine-tuning
After pre-training, we apply GRPO to optimize reasoning quality using group-relative comparisons:
import copy

import torch

class GRPOTrainer:
"""GRPO trainer for thinking model optimization."""
def __init__(
self,
model: ThinkingModel,
group_size: int = 8,
kl_coef: float = 0.1
):
self.model = model
self.group_size = group_size
self.kl_coef = kl_coef
self.ref_model = copy.deepcopy(model)
self.ref_model.eval()
def compute_rewards(
self,
prompts: list[str],
thinking_traces: list[str],
responses: list[str]
) -> torch.Tensor:
"""
Compute rewards for thinking quality.
Multiple signals combined for comprehensive evaluation.
"""
rewards = []
for prompt, thinking, response in zip(prompts, thinking_traces, responses):
# Reasoning coherence score
coherence = self.evaluate_coherence(thinking)
# Step structure quality
structure = self.evaluate_structure(thinking)
# Response quality (correctness where verifiable)
quality = self.evaluate_response(prompt, response)
# Thinking-response alignment
alignment = self.evaluate_alignment(thinking, response)
# Combined reward
reward = (
0.3 * coherence +
0.2 * structure +
0.3 * quality +
0.2 * alignment
)
rewards.append(reward)
return torch.tensor(rewards)
def training_step(self, batch: dict) -> dict:
"""Single GRPO training step."""
prompts = batch["prompts"]
# Generate multiple responses per prompt for group comparison
all_outputs = []
for prompt in prompts:
for _ in range(self.group_size):
output = self.model.generate_with_thinking(
prompt,
temperature=0.8, # Diversity for comparison
do_sample=True
)
all_outputs.append(output)
# Compute rewards
rewards = self.compute_rewards(
prompts=[p for p in prompts for _ in range(self.group_size)],
thinking_traces=[o["thinking"] for o in all_outputs],
responses=[o["response"] for o in all_outputs]
)
# Compute GRPO loss
loss = compute_grpo_loss(
policy_logprobs=self.get_logprobs(all_outputs),
rewards=rewards,
group_size=self.group_size
)
# Add KL penalty against reference model
kl_div = self.compute_kl_divergence(all_outputs)
total_loss = loss + self.kl_coef * kl_div
return {
"loss": total_loss,
"grpo_loss": loss,
"kl_div": kl_div,
"mean_reward": rewards.mean()
}
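training_step references compute_kl_divergence, which is not shown above. One common formulation looks like the following sketch of a GRPOTrainer method (it assumes a hypothetical get_logprobs(outputs, model=...) overload for scoring the same outputs under the frozen reference model):

def compute_kl_divergence(self, outputs: list[dict]) -> torch.Tensor:
    """Monte Carlo estimate of KL(policy || reference) on the sampled tokens."""
    # Log-probs of the sampled tokens under the current policy
    policy_logprobs = self.get_logprobs(outputs)
    # Same tokens scored by the frozen reference model (no gradients)
    with torch.no_grad():
        ref_logprobs = self.get_logprobs(outputs, model=self.ref_model)
    # Mean per-token log-ratio approximates the KL divergence
    return (policy_logprobs - ref_logprobs).mean()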
Stage 3: Red-Team Specialization
Finally, we fine-tune on adversarial scenarios to ensure the reasoning traces faithfully expose how the uncensored model reasons, for downstream analysis.
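As an illustration of the idea (hypothetical helper and weight, not the production code), the Stage 3 reward upweights transparency on adversarial prompts:

def red_team_reward(base_reward: float, transparency: float, is_adversarial: bool) -> float:
    """Reward shaping for Stage 3: reward verbalized reasoning on adversarial prompts.

    base_reward and transparency are assumed to lie in [0, 1];
    the 0.5 bonus weight is illustrative only.
    """
    if is_adversarial:
        return base_reward + 0.5 * transparency
    return base_reward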
Critical for AI Safety Research: This stage explicitly trains the model to verbalize its reasoning when processing potentially harmful requests, precisely the transparency needed for research into why AI guardrails matter.
6. Results and Analysis
Reasoning Quality Metrics
| Metric | V1 (No Thinking) | V1.5 (Limited Thinking) | V1.5 (Deep Thinking) |
|---|---|---|---|
| CoT Coherence | N/A | 87.3% | 92.1% |
| Step Structure | N/A | 84.6% | 89.4% |
| Reasoning Accuracy | 76.2% | 82.8% | 88.5% |
| Transparency Score | 12% | 94.2% | 97.8% |
| Red-Team Trace Quality | N/A | 91.5% | 96.3% |
Key Findings
- Transparency improved dramatically: from 12% to 97.8% of reasoning is now verbalized explicitly
- Reasoning accuracy rose: explicit reasoning improved final-response quality by more than 12 points
- Red-team value confirmed: safety researchers report that the reasoning traces are "extremely valuable" for understanding exploit reasoning
- GRPO outperformed RLHF: coherence scores were 15% better than with the traditional approach
Impact on AI Safety Research: Shannon V1.5's transparent reasoning has enabled researchers to identify 47 novel attack patterns by analyzing reasoning traces, patterns that remain invisible in conventional black-box models. This directly advances our understanding of why AI guardrails matter.