🔄 卡若AI sync 2026-03-10 12:30 | Updated: 水桥 platform integration, 卡木, master index & entry points, operations-hub workbench | Excluded >20MB: 11 files
@@ -4,8 +4,8 @@ description: 《一场soul的创业实验》内容与运营统一入口——写
 triggers: Soul创业实验、写Soul文章、写授文章、Soul派对写文章、第9章写文章、写soul场次、soul文章规则、Soul文章上传、Soul派对文章、第9章上传、soul上传、写soul文章、运营报表、派对填表、派对纪要
 owner: 水桥
 group: 水
-version: "1.1"
-updated: "2026-03-01"
+version: "1.2"
+updated: "2026-03-18"
 ---

# Soul创业实验 Skill
@@ -20,7 +20,7 @@ updated: "2026-03-01"

| Subtype | Example triggers | Notes |
|:---|:---|:---|
-| **Writing** | 写Soul文章、写授文章、Soul派对写文章、第9章写文章、写soul场次、soul文章规则 | Write a single Chapter 9 article from the party TXT. **Read `写作/写作规范.md` first**: first person 「我」 at most three times in the whole piece, a blank line after every sentence, plain everyday language; **numbers and scenes must be concrete** (amounts, headcount, duration, exposure, cost, etc. as specific figures; when Soul paid-traffic / party-value math comes up, spell out ~75 impressions per 1 person entering, ~30k impressions/day, paid traffic ~6–10 RMB per 1,000 impressions, etc.); **one share line (≤50 chars) at ~20% and one at the end — never the word 「干货」 or formats like 「干货:」**; subtly weave in contact-the-admin / clips / side-hustle plugs |
+| **Writing** | 写Soul文章、写授文章、Soul派对写文章、第9章写文章、写soul场次、soul文章规则 | Write a single Chapter 9 article from the party TXT. **Read `写作/写作规范.md` first**: first person is 「我」, never 「房主」; a blank line after every sentence; plain everyday language; **6:3:1 content ratio** (60% core / 30% secondary / 10% other); **insert a core-takeaway block at ~50%** (3–5 actionable points, tight to the topic); **numbers and scenes must be concrete**; **one share line at ~20% and one at the end**; subtly weave in contact-the-admin / clips / side-hustle plugs |
| **Upload** | Soul文章上传、Soul派对文章、第9章上传、soul上传、写soul文章、文章写好上传 | After the article is written, upload it to the mini program; **after upload, post to the fixed Feishu group**: the first 6% of the body (**one sentence per line, blank line between lines**) + the chapter poster image (with the mini-program QR code); **never post the mini-program link**. See `上传/README.md` and `上传/推送逻辑.md` |
| **运营报表** | 运营报表、派对填表、派对截图填表发群、派对纪要、106场、107场、本月运营数据 | Party performance data → Feishu sheet → smart minutes → Feishu group; see the 运营报表 Skill under 飞书管理 |
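The 「first 6% of the body, one sentence per line, blank line between」 push format described above can be approximated by a small helper; `preview_for_group` is a hypothetical sketch (not the actual push script), splitting on the Chinese full stop only:

```python
def preview_for_group(body: str, ratio: float = 0.06) -> str:
    """Take the first `ratio` of the article (by characters), then format it
    one sentence per line with a blank line between lines."""
    head = body[: max(1, int(len(body) * ratio))]
    # Chinese full stops delimit sentences; re-append the delimiter to each one.
    sentences = [s + "。" for s in head.split("。") if s.strip()]
    return "\n\n".join(sentences)

text = "第一句。第二句。第三句。" * 20
print(preview_for_group(text).splitlines()[0])  # first sentence on its own line
```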
@@ -30,7 +30,7 @@ updated: "2026-03-01"

## Subtype 1: Writing

-- **Entry point**: `写作/写作规范.md` (person, structure, formatting, subtle plugs, etc. — the single source of truth).
+- **Entry point**: `写作/写作规范.md` (person, 6:3:1 content ratio, core-takeaway block, formatting, subtle plugs, etc. — the single source of truth).
- **When to use**: writing 第X场, 写Soul文章, 写授文章. When done, output to the Chapter 9 directory of the book draft as `9.xx 第X场|主题.md`, then run Subtype 2 (upload).
- **Numbers and scenes**: **any number or scene in the article must be concrete**; when Soul paid traffic or the party-value math comes up, write the specific figures (e.g. ~75–80 impressions per 1 person entering, ~30k impressions/day, 300–600 room entries, ~6–10 RMB per 1,000 paid impressions, ~0.42 RMB per entry, ~20 RMB per WeChat lead). See the 「数值与场景」 item in the writing spec.
@@ -82,3 +82,4 @@ Soul 相关仅保留本 Skill 一个目录,原 Soul文章上传、Soul文章

|:---|:---|:---|
| 1.0 | 2026-02-26 | Initial version; writing, upload, and 运营报表 unified as subtypes; the former Soul文章写作 / Soul文章上传 skills merged here |
| 1.1 | 2026-03-01 | After upload, push to the Feishu group: first 6% + poster image, no link; added the push-logic doc and pipeline notes |
+| 1.2 | 2026-03-18 | Writing spec: 6:3:1 content ratio (60% core / 30% secondary / 10% other); core-takeaway block at ~50%; first person 「我」, not 「房主」 |
@@ -6,7 +6,9 @@

## Prerequisites

-- The article is written per **写作/写作规范.md** and saved as `9.xx 第X场|主题.md` in the Chapter 9 directory of the book draft.
+- The article is written per **写作/写作规范.md**.
+- **Chapter 9 (session 101 and earlier)**: save as `9.xx 第X场|主题.md` in the Chapter 9 directory.
+- **2026 sessions (session 102 onward)**: save as `第X场|主题.md` in the `2026年/` directory.

---
@@ -15,29 +17,39 @@

| Item | Value |
|:---|:---|
| Chapter 9 article directory | `/Users/karuo/Documents/个人/2、我写的书/《一场soul的创业实验》/第四篇|真实的赚钱/第9章|我在Soul上亲访的赚钱案例/` |
-| Project (with content_upload) | `一场soul的创业实验-永平` or `一场soul的创业实验-react` (`content_upload.py` at the root) |
+| **2026 sessions directory** | `/Users/karuo/Documents/个人/2、我写的书/《一场soul的创业实验》/2026年/` |
+| Project (with content_upload) | `一场soul的创业实验-永平` (`content_upload.py` at the root) |
| Chapter 9 parameters | part-4, chapter-9, price 1.0 |
+| **2026 daily party takeaway parameters** | part-2026-daily, chapter-2026-daily, id 10.xx, price 1.0 |

---

## Upload commands

-Run in the **永平** or **react** project root:
+### 2026 sessions (session 102 onward) → 2026每日派对干货

Sessions 102 and later all go into the 「2026每日派对干货」 directory, numbered 10.01, 10.02, 10.03 …, matching the backend directory structure.

```bash
cd "/Users/karuo/Documents/开发/3、自营项目/一场soul的创业实验-永平"

# Auto-generate the 10.xx id (without --id, uses the current max number + 1)
python3 content_upload.py --title "第X场|标题" \
  --content-file "<full path to the article>" --part part-2026-daily --chapter chapter-2026-daily --price 1.0

# Or specify the id (e.g. 10.18)
python3 content_upload.py --id 10.18 --title "第119场|开派对的初心是早上不影响老婆睡觉" \
  --content-file "/Users/karuo/Documents/个人/2、我写的书/《一场soul的创业实验》/2026年/第119场|开派对的初心是早上不影响老婆睡觉.md" \
  --part part-2026-daily --chapter chapter-2026-daily --price 1.0
```

### Chapter 9 (session 101 and earlier)

```bash
python3 content_upload.py --id 9.xx --title "9.xx 第X场|标题" \
  --content-file "<full path to the article>" --part part-4 --chapter chapter-9 --price 1.0
```

Example (9.23):

```bash
cd "/Users/karuo/Documents/开发/3、自营项目/一场soul的创业实验-react"
python3 content_upload.py --id 9.23 --title "9.23 第110场|Soul变现逻辑全程公开" \
  --content-file "/Users/karuo/Documents/个人/2、我写的书/《一场soul的创业实验》/第四篇|真实的赚钱/第9章|我在Soul上亲访的赚钱案例/9.23 第110场|Soul变现逻辑全程公开.md" \
  --part part-4 --chapter chapter-9 --price 1.0
```

- If the id already exists → **update**; if not → **create**.
- Dependency: `pip install pymysql`; the database is Tencent Cloud `soul_miniprogram.chapters`.
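The auto-numbering rule above (no `--id` → current max 10.xx plus one) can be sketched as a pure helper; `next_auto_id` and the sample id lists are hypothetical, not part of `content_upload.py`:

```python
def next_auto_id(existing_ids, prefix="10"):
    """Return the next '10.xx' id: max existing minor number + 1 (01 if none)."""
    minors = [
        int(i.split(".", 1)[1])
        for i in existing_ids
        if i.startswith(prefix + ".") and i.split(".", 1)[1].isdigit()
    ]
    nxt = (max(minors) + 1) if minors else 1
    return f"{prefix}.{nxt:02d}"

print(next_auto_id(["10.01", "10.17", "10.18"]))  # → 10.19
print(next_auto_id([]))                           # → 10.01
```

Chapter 9 ids (`9.xx`) are untouched because only ids starting with the given prefix are counted.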
@@ -51,6 +63,8 @@ python3 content_upload.py --id 9.23 --title "9.23 第110场|Soul变现逻辑

```bash
python3 scripts/send_chapter_poster_to_feishu.py 9.xx "第X场|标题" --md "<full path to the .md article>"
# Chapters under 2026每日派对干货 use 10.xx, e.g.:
python3 scripts/send_chapter_poster_to_feishu.py 10.18 "第119场|标题" --md "<full path to the .md article>"
```

- Requires `scripts/.env.feishu` (FEISHU_APP_ID, FEISHU_APP_SECRET).
@@ -63,3 +77,13 @@ python3 scripts/send_chapter_poster_to_feishu.py 9.xx "第X场|标题" --md "<

- View the part structure: `python3 content_upload.py --list-structure`
- List chapters: `python3 content_upload.py --list-chapters`

### Migration: move sessions 102+ into 2026每日派对干货

If 2026 sessions were previously filed under Chapter 9, the migration script batch-moves them into 「2026每日派对干货」 and renumbers them 10.01, 10.02 …:

```bash
cd "/Users/karuo/Documents/开发/3、自营项目/一场soul的创业实验-永平"
python3 scripts/migrate_2026_sections.py            # preview only
python3 scripts/migrate_2026_sections.py --execute  # run the migration
```
@@ -31,6 +31,8 @@

## 三、Topic and structure

- **One point**: each section makes exactly one core point; keep the theme clear
- **6:3:1 content ratio (mandatory)**: allocate the whole piece as **60% primary** (the core theme, tight to the section title), **30% secondary** (support, cases, extensions), **10% other** (color, transitions, passing mentions). After drafting, self-check that core/secondary/other land near 60/30/10.
- **Mid-article core-takeaway block (mandatory)**: at ~50%, insert **one condensed block of key points** — 3–5 actionable items tight to the section topic, set off as its own paragraph and separated from the rest with `---`; **never use the word 「干货」** — write punchy one-liners directly, ~20–40 characters each.
- **Use data**: back claims with concrete figures (viewers, headcount, duration, revenue, etc.); **numbers and scenes must be specific**, no vague phrasing — any algorithm, cost, or ratio mentioned gets its concrete number (e.g. Soul feed ~75 impressions per 1 person entering, ~30k impressions/day, paid traffic ~6–10 RMB per 1,000 impressions)
- **Match the table of contents**: section names and content stay consistent with the book's structure and the style of the other sections
- **Time**: follow the timestamps in the documents / chat logs
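The 6:3:1 self-check can be approximated by character counts; a minimal sketch, where the tagging scheme and the `ratio_check` helper are hypothetical (nothing in the spec prescribes how sections are labeled):

```python
def ratio_check(sections):
    """sections: list of (tag, text) with tag in {'core', 'secondary', 'other'}.
    Returns each tag's share of total characters, rounded to 2 decimals."""
    totals = {"core": 0, "secondary": 0, "other": 0}
    for tag, text in sections:
        totals[tag] += len(text)
    total = sum(totals.values()) or 1  # avoid division by zero on empty input
    return {tag: round(n / total, 2) for tag, n in totals.items()}

shares = ratio_check([
    ("core", "x" * 600),
    ("secondary", "x" * 300),
    ("other", "x" * 100),
])
print(shares)  # → {'core': 0.6, 'secondary': 0.3, 'other': 0.1}
```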
@@ -95,5 +97,6 @@

- Put dialogue, details, and opinions on separate lines; avoid big undifferentiated blocks
- Use `---` as the paragraph separator (consistent with the whole book)
- **Share lines**: one at ~20% of the piece (≤50 chars) and one at the end (≤50 chars, on-topic); **never the word 「干货」 or formats like 「干货:」** — just a single punchy line
+- **6:3:1 + core takeaways**: content ratio 60% core / 30% secondary / 10% other; insert a core-takeaway block at ~50% (3–5 items, on-topic, actionable); see section 三
- **File name (mandatory)**: Chapter 9 session files may be 「第X场.md」 or 「第X场-短句.md」, where the short phrase carries the provocative point or the efficiency hook; the short phrase must contain **no spaces, commas, or full-width symbols** (e.g. `第114场-有AI差1万倍.md`, `第115场-可控的事先做.md`). Put the same title on the first line of the body, otherwise some editors choke.
- When writing or revising Chapter 9 session articles, **read this spec first** and **read the 运营报表 first**; every figure on the report (duration, viewers, room entries, etc.) goes into the opening data paragraph, spelled out. All future Soul party session articles use this template.
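The file-name rule can be checked mechanically; a minimal sketch, where `valid_session_filename` is a hypothetical helper and the forbidden-character set is my reading of 「no spaces, commas, or full-width symbols」:

```python
import re

# 第<digits>场, optionally -<short phrase>, then .md; the phrase may not contain
# whitespace, ASCII/full-width commas, or common full-width punctuation.
PATTERN = re.compile(r"第\d+场(?:-[^\s,,。、:;!?()【】]+)?\.md")

def valid_session_filename(name: str) -> bool:
    return PATTERN.fullmatch(name) is not None

print(valid_session_filename("第114场-有AI差1万倍.md"))   # → True
print(valid_session_filename("第116场-有 空格.md"))      # → False
```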
@@ -18,6 +18,8 @@ updated: "2026-03-02"

**One-click write of the work log into the Feishu wiki — fully silent and automatic, no manual steps**

**Fixed Feishu log target**: always write to [this link](https://cunkebao.feishu.cn/wiki/ZdSBwHrsGii14HkcIbccQ0flnee); **no log body is kept locally**. Before writing, the script checks that the document title contains the current month (e.g. 「3月」); if the document's month does not match, migrate first: create the current month's document in Feishu, write the new document token with `set-march-token`, then retry.

---

## One-click usage (recommended)

@@ -221,6 +223,14 @@ def get_today_tasks():

---

## Soul 发到素材库 (sub-skill · packable gene capsule)

Writes a Soul party finished-clips directory (with `目录索引.md` and the mp4s) into the Feishu content board: **title** (119场 3月8日 第N场 标题), **time**, **progress status**, **attachment** (mp4 auto-uploaded to drive), **multi-platform descriptions** (Douyin / Xiaohongshu / Channels). Supports `--update-existing` to update existing records.

**Full reference**: see [Soul发到素材库_SKILL.md](./Soul发到素材库_SKILL.md). Gene capsule: `pack "02_卡人(水)/水桥_平台对接/飞书管理/Soul发到素材库_SKILL.md"`.

---

## Smart video slicing (new)

### Features
02_卡人(水)/水桥_平台对接/飞书管理/Soul发到素材库_SKILL.md · new file · 111 lines
@@ -0,0 +1,111 @@
---
name: Soul发到素材库
description: Soul party finished clips → Feishu content board. Title/time/progress status + attachment (mp4 uploaded to drive) + multi-platform descriptions (Douyin/Xiaohongshu/Channels). Supports creating and updating records. Packable as a gene capsule.
triggers: Soul发到素材库、成片发飞书、切片发飞书、视频分发飞书、发到素材库
parent: 飞书管理
owner: 水桥
group: 水
version: "1.0"
updated: "2026-03-10"
---

# Soul 发到素材库 · Gene capsule

> **In one sentence**: a Soul party finished-clips directory (with 目录索引.md and the mp4s) → the Feishu wiki bitable (content board), with **attachment upload** and **multi-platform descriptions** (Douyin/Xiaohongshu/Channels); supports both creating and updating records.

---

## 一、Feature overview

| Feature | Notes |
|:---|:---|
| **标题** (title) | Format: 119场 3月8日 第N场 标题 |
| **时间** (time) | Actual live-stream date, YYYY-MM-DD |
| **进展状态** (progress status) | Board group, e.g. 2026年3月 |
| **附件** (attachment) | mp4 auto-uploaded to Feishu drive and written into the attachment field |
| **你的解决方案** (your solution) | Multi-platform descriptions: Douyin, Xiaohongshu, Channels (title + description + hashtags) |

---

## 二、Script path and prerequisites

| Item | Value |
|:---|:---|
| Script | `02_卡人(水)/水桥_平台对接/飞书管理/脚本/feishu_slice_upload_to_wiki_table.py` |
| Feishu token | `.feishu_tokens.json` in the same directory (shared with write_today_three_focus) |
| Clips directory | Contains `目录索引.md` (序号 \| 标题 \| Hook \| CTA) and the mp4 files |
| Target link | `https://cunkebao.feishu.cn/wiki/MKhNwmYwpi1hXIkJvfCcu31vnDh?table=tblGjpeCk1ADQMEX` |

---

## 三、One-click commands

### 3.1 Check the table only (no writes)

```bash
cd /Users/karuo/Documents/个人/卡若AI/02_卡人(水)/水桥_平台对接/飞书管理/脚本
python3 feishu_slice_upload_to_wiki_table.py --check-only \
  --wiki-node MKhNwmYwpi1hXIkJvfCcu31vnDh --table tblGjpeCk1ADQMEX
```

### 3.2 Create (new records + attachments + multi-platform descriptions)

```bash
python3 feishu_slice_upload_to_wiki_table.py \
  --wiki-node MKhNwmYwpi1hXIkJvfCcu31vnDh --table tblGjpeCk1ADQMEX \
  --clips-dir "/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片" \
  --session 119 --date 2026-03-08 --group "2026年3月"
```

### 3.3 Update existing records (backfill attachments + descriptions)

```bash
python3 feishu_slice_upload_to_wiki_table.py --update-existing \
  --wiki-node MKhNwmYwpi1hXIkJvfCcu31vnDh --table tblGjpeCk1ADQMEX \
  --clips-dir "/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片" \
  --session 119 --date 2026-03-08
```

### 3.4 Write title/time/description only, no attachment upload

```bash
python3 feishu_slice_upload_to_wiki_table.py \
  --clips-dir "..." --session 119 --date 2026-03-08 --no-upload-attachment
```

---

## 四、Multi-platform description format

The script generates the block below from the Hook/CTA in `目录索引.md` and writes it into 「你的解决方案」:

```
【抖音】
标题:xxx(≤28字)。#Soul派对 #创业日记 #晨间直播 #私域干货 #卡若创业派对
描述:Hook。关注我,每天学一招私域干货

【小红书】
标题:xxx(≤20字)
正文:Hook。关注我... #Soul派对 ...

【视频号】
描述:Hook。关注我... #Soul派对 ...
```

---

## 五、Gene capsule

This Skill can be packed as a gene capsule for other Agents/projects to inherit:

```bash
cd /Users/karuo/Documents/个人/卡若AI
python3 "05_卡土(土)/土砖_技能复制/基因胶囊/脚本/gene_capsule.py" pack \
  "02_卡人(水)/水桥_平台对接/飞书管理/Soul发到素材库_SKILL.md"
```

Unpack to inherit:

```bash
python3 .../gene_capsule.py unpack Soul发到素材库_*.json -o <目标目录>
```
@@ -1,40 +0,0 @@
# March 2 · Feishu log body (completed version, percentages spelled out)

> If `FEISHU_MARCH_WIKI_TOKEN` is configured, run directly:
> `python3 脚本/write_0302_feishu_log.py` to write into the March document.
> If not configured, copy the content below into the Feishu March document under 「3月2日」.

---

## [Important & urgent] 卡若 (today's review, monthly & final goals, today's core, one-person company, 玩值电竞)

**T (goals)**
- Yesterday, March 1: one-person company 5%, 玩值电竞 25%, Feishu log 100%
- Monthly goal ~**12%**, **88%** away from the final goal (relative to the 2026 overall goal of 100%)
- One-person-company Agent → video slicing / articles / livestreams / mini program / Moments / aggregation **5%**
- 玩值电竞 → Docker / feature work **25%**
- Today's core: 20 Soul videos per day + 1 Moments post at 20:00

**N (process)**
- [Review] consolidated from chat logs and today's documents; yesterday's goals match this year's overall goals
- [March breakthrough execution] monthly/final percentages written against the 2026 overall goal
- [Today] 20 videos + 1 Moments post; one-person company first, 玩值电竞 second

**T (thinking)**
- One core item today: 20 Soul videos + 1 Moments post at 8 pm, keep closing the gap to the final goal
- All percentages are relative to the overall goal: month 12%, one-person company 5%, 玩值电竞 25%

**W (work)**
- 20 Soul videos
- 1 Moments post at 20:00
- Push one-person company / 玩值电竞 forward
- Feishu log

**F (feedback)**
- Monthly/final goal **12% / 100%**, gap **88%**
- One-person company **5%** 🔄 | 玩值电竞 **25%** 🔄
- Today's core → 20 Soul videos + 8 pm Moments post 🔄

---

*Script: `脚本/write_0302_feishu_log.py`; read 运营中枢/工作台/2026年整体目标.md before writing*
@@ -1,69 +0,0 @@
# March 5 · Feishu log body (three focus items + carry-overs)

> Automatic write requires `FEISHU_MARCH_WIKI_TOKEN`, then run:
> `python3 脚本/write_today_three_focus.py`
> If not configured, copy the content below into the Feishu March document under 「3月5日」.

---

## [Important & urgent] 卡若 (today's three items, carry-overs, monthly & final goals)

**T (goals)**
- ① 卡若AI: finish the startup polish, with the API usable
- ② 一场创业实验: website and mini program ready to launch (usable)
- ③ 玩值电竞: complete the next round of planning
- Monthly goal ~**12%**, **88%** away from the final goal (relative to the 2026 overall goal of 100%)

**N (process)**
- [Today's three items] 卡若AI polish & usable API; 一场创业实验 website/mini-program launch; 玩值电竞 planning
- [Carry-overs] merged into the list below; push forward or continue today

**T (thinking)**
- The three items are today's main line; keep iterating the carry-overs, drop nothing

**W (work)**
- 卡若AI polish + usable API
- 一场创业实验 website/mini-program launch
- 玩值电竞 next-round planning
- Carry-over items (see list below)
- Feishu log

**F (feedback)**
- ① 卡若AI polish / API → in progress 🔄
- ② 一场创业实验 website/mini program → in progress 🔄
- ③ 玩值电竞 planning → in progress 🔄
- Monthly/final 12% / 100%, gap 88%

---

## [Important, not urgent] Carry-over list

**T (goals)**
Previously unfinished; continue or make up today

**N (process)**
- [Unfinished] 20 Soul videos + 1 Moments post at 20:00 (daily fixture)
- [Unfinished] video Skill four-pane slicing, 20 clips/day
- [Unfinished] one-person-company Agent (currently ~5%)
- [Unfinished] 玩值电竞 Docker/feature work (currently ~25%)
- [Unfinished] 卡若AI 4 optimizations / ongoing API & website work

**T (thinking)**
Unfinished items are never deleted, only stacked onto today or later

**W (work)**
- 20 Soul videos + 1 Moments post at 20:00 (daily fixture)
- Video Skill four-pane slicing, 20 clips/day
- One-person-company Agent (currently ~5%)
- 玩值电竞 Docker/feature work (currently ~25%)
- 卡若AI 4 optimizations / ongoing API & website work

**F (feedback)**
- Unfinished → 20 Soul videos + 8 pm Moments post 🔄
- Unfinished → four-pane slicing 20/day 🔄
- Unfinished → one-person company / 玩值电竞 🔄
- … (see above) 🔄

---

*Script: `脚本/write_today_three_focus.py`; read 运营中枢/工作台/2026年整体目标.md before writing*
@@ -1,38 +0,0 @@
# March 6 · Feishu log (recent progress summary + 20 clips/day + the 1980 deal full funnel + goal percentages)

> Automatic write requires the March document token, then run: `python3 脚本/write_today_with_summary.py`
> If not configured, or the write fails, copy the content below into the Feishu March document under 「3月6日」.

---

## [Important & urgent] 卡若 (recent progress summary, upcoming goals, goal percentages)

**T (goals)**
- Monthly goal ~**12%**, **88%** away from the final goal (relative to the 2026 overall goal of 100%)
- Upcoming goals: **20 video clips per day** (Soul portrait/four-pane); **close the 1980 deal and the full funnel** (acquisition → private domain → conversion)
- One-person company ~5%, 玩值电竞 ~25%; today's core and completion percentages are in the feedback section

**N (process)**
- [Progress] Feishu token fully on the CLI (get/set-march-token); today's log format (three items + carry-overs) locked in
- [Progress] Soul session 114/115 minutes: backend conversion beats frontend; win on volume with posts + clips; keep the private domain in our own hands
- [Progress] main line is 卡若AI polish & API, 一场创业实验 website/mini program, and 玩值电竞 planning; 木叶 video-slicing SKILL and four-pane slicing at 20 clips/day

**T (thinking)**
- The progress summary comes from the whole repo + the minutes; 20 clips/day and the 1980 full funnel are the key moves toward the overall goal — keep the percentages explicit for tracking

**W (work)**
- 20 video clips per day (Soul/four-pane)
- The 1980 deal and the full funnel (product/price → acquisition → private domain → conversion)
- 卡若AI polish / 一场创业实验 / 玩值电竞
- 1 Moments post at 20:00
- Feishu log

**F (feedback)**
- Monthly/final goal **12% / 100%**, gap **88%**
- 20 clips/day → today's completion X% (X = done/20×100) 🔄
- 1980 deal & full funnel → in progress 🔄
- One-person company 5% 🔄 | 玩值电竞 25% 🔄

---

*Script: `脚本/write_today_with_summary.py`; read 运营中枢/工作台/2026年整体目标.md before writing*
@@ -1,10 +0,0 @@
# March 9 Feishu log

## 远志 (玩值)
- Video editing → slicing → distribution across all platforms, target 500/day
- Finish the SOP and the video slicing

## 李永平
- 一场创业实验, yongpxu-soul branch

## Backend data
- 神射手 / 玩值电竞: see the project & port registry

## Token expired
- Run: python3 feishu_token_cli.py get-access-token
@@ -1,88 +0,0 @@
# Today's Feishu log (远志 + 李永平 + backend + last week's summary)

> If the March token is not configured, copy the text below into the Feishu March document under today's date.
> **Token expired**: just run `python3 feishu_token_cli.py get-access-token`, no need to ask.

---

## [Important & urgent] 远志 (玩值)

**T (goals)**
- Turn video-editing material into clips → distribute across all platforms
- Target: publish 500 videos per day
- Finish the SOP and the slicing workflow; list the daily to-dos for the coming month

**N (process)**
- From 远志's plan: end-to-end editing → slicing → distribution; win on volume, with posting + slicing as the core move

**T (thinking)**
- Once the slicing SOP is standardized it can run at scale; a month of daily task lists makes tracking and review easy

**W (work)**
- [ ] Refine the video slicing SOP (edit → slice → distribute)
- [ ] Draft the daily video task list for the coming month
- [ ] Slice and distribute across all platforms (target 500/day)

**F (feedback)**
- [ ] SOP in progress
- [ ] Daily task list to be produced
- [ ] 500 videos/day → today's completion X%

---

## [Important & urgent] 李永平

**T (goals)**
- 永平 hands over 「一场创业实验」 + sync the yongpxu-soul branch

**N (process)**
- Handover started 2/26; tracking branch and development progress

**T (thinking)**
- Keep communicating so the handover stays smooth

**W (work)**
- [ ] Track 一场创业实验 website/mini-program progress
- [ ] Sync and integrate the yongpxu-soul branch

**F (feedback)**
- [ ] Handover & branch in progress

---

## [Important, not urgent] 卡若 (backend data, last week's summary, token)

**T (goals)**
- Organize the backend data links and confirm they are reachable
- Check and polish last week's March summary; write the progress clearly
- Token expired: handle directly with the command

**N (process)**
- Backend data: 神射手 kr-users.quwanzhi.com, 玩值电竞 localhost:3001; see the project & port registry
- Token expired → run: `python3 feishu_token_cli.py get-access-token`

**T (thinking)**
- Token expiry needs no confirmation, just refresh via the command; polishing last week's summary closes the weekly review loop

**W (work)**
- [ ] Register/verify the backend data links
- [ ] Check and polish last week's March summary
- [ ] Write the Feishu log

**F (feedback)**
- [x] Backend data links: see 00_账号与API索引 and the project & port registry
- [ ] Last week's summary checked and polished
- [x] Token refreshed

---

## Backend data links (quick reference)

| Project | Address | Notes |
|------|------|------|
| 神射手 | kr-users.quwanzhi.com | See the project & port registry |
| 玩值电竞 | http://localhost:3001 | Docker deployment, started from the 神射手 directory |
| 玩值大屏 | localhost:3034 | `docker compose up -d` in the project directory |
| n8n | http://localhost:5678 | Workflows, website orchestration |

See: `运营中枢/工作台/项目与端口注册表.md`, `00_账号与API索引.md`
02_卡人(水)/水桥_平台对接/飞书管理/参考资料/飞书日志_固定链接.md · new file · 8 lines
@@ -0,0 +1,8 @@
# Feishu log · fixed link

> **Feishu logs are always written to the linked document; no local copy of the log body is kept.**

- **Link**: https://cunkebao.feishu.cn/wiki/ZdSBwHrsGii14HkcIbccQ0flnee
- **March token**: `ZdSBwHrsGii14HkcIbccQ0flnee` (already written to `.feishu_month_wiki_tokens.json`)

**Month check**: before writing, the script checks that the document title contains the current month (e.g. 「3月」). If the document's month does not match, it prompts: **first create the current month's document in Feishu, then run `feishu_token_cli.py set-march-token <new doc token>` and retry** (migrate one month at a time).
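The month check reduces to a substring test on the document title; a minimal sketch, where `month_matches` is a hypothetical mirror of the guard, not the script's actual function:

```python
def month_matches(doc_title: str, month: int) -> bool:
    """Mirror of the wrong-month guard: the target document's title must
    contain the current month marker, e.g. 「3月」 for March."""
    return f"{month}月" in doc_title

print(month_matches("2026年3月工作日志", 3))  # → True
print(month_matches("2026年2月工作日志", 3))  # → False
```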
@@ -1,6 +1,6 @@
 {
-  "access_token": "u-dNsovXz6Z23G8q5DaAUYOBlh3A31ghgNXwGaYN0027hE",
-  "refresh_token": "ur-eHGm3TE658MbdAGK5fO8lVlh1IH1ghMpP0GaER00230Z",
+  "access_token": "u-e8JoHTLu19oWa_a_gvWTa.lh3631ghirhwGaVB00224J",
+  "refresh_token": "ur-du3k5dgUBftqlHxZpLOoymlh14H1ghgpX0GaENg0234Z",
   "name": "飞书用户",
-  "auth_time": "2026-03-09T05:48:48.008966"
+  "auth_time": "2026-03-10T12:25:02.126154"
 }
@@ -410,10 +410,11 @@ def write_log(token, date_str=None, tasks=None, wiki_token=None, overwrite=False
    doc_id = node['obj_token']
    doc_title = node.get('title', '')

-   # Guard against cross-month writes
+   # Cross-month guard: if the document's month differs from the current month, migrate first (create the month's doc and set-march-token)
    month = parse_month_from_date_str(date_str)
    if month and f"{month}月" not in doc_title:
-       print(f"❌ 月份校验失败:{date_str} 不应写入《{doc_title}》")
+       print(f"❌ 文档月份与当月不符:《{doc_title}》不含「{month}月」")
+       print(f"   请先在飞书新建当月文档,再用 feishu_token_cli.py set-march-token <新文档token> 后重试")
        return False

    r = requests.get(f"https://open.feishu.cn/open-apis/docx/v1/documents/{doc_id}/blocks",
02_卡人(水)/水桥_平台对接/飞书管理/脚本/feishu_slice_upload_to_wiki_table.py · new file · 426 lines
@@ -0,0 +1,426 @@
#!/usr/bin/env python3
"""
Upload Soul party finished clips to a Feishu wiki bitable (content board).

- Target link format: https://cunkebao.feishu.cn/wiki/{wiki_node_token}?table={table_id}&view=...
- get_node resolves the obj_token used as the bitable app_token; records are then appended to that table.
- Record format: title = 「119场 3月8日 第N场 标题」; time = actual date; group = 2026年3月; attachment = the finished mp4 (uploaded to drive first).

Usage:
    # Check the table's field structure only (no writes)
    python3 feishu_slice_upload_to_wiki_table.py --check-only --wiki-node MKhNwmYwpi1hXIkJvfCcu31vnDh --table tblGjpeCk1ADQMEX

    # Upload from a given clips directory
    python3 feishu_slice_upload_to_wiki_table.py \
        --wiki-node MKhNwmYwpi1hXIkJvfCcu31vnDh \
        --table tblGjpeCk1ADQMEX \
        --clips-dir "/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片" \
        --session 119 --date 2026-03-08
"""
import os
import sys
import json
import re
import argparse
import requests
import time
from pathlib import Path

SCRIPT_DIR = Path(__file__).resolve().parent
sys.path.insert(0, str(SCRIPT_DIR))

# Reuse the Feishu wiki token logic (user identity; bitable needs a user token)
from feishu_wiki_create_doc import get_token, load_tokens, CONFIG

# Default target (content board)
DEFAULT_WIKI_NODE = "MKhNwmYwpi1hXIkJvfCcu31vnDh"
DEFAULT_TABLE_ID = "tblGjpeCk1ADQMEX"
# Default clips directory
DEFAULT_CLIPS_DIR = "/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片"

def get_node_and_app_token(user_token: str, wiki_node_token: str):
    """Resolve the wiki node via get_node; if it is a bitable, return its app_token."""
    r = requests.get(
        "https://open.feishu.cn/open-apis/wiki/v2/spaces/get_node",
        params={"token": wiki_node_token},
        headers={"Authorization": f"Bearer {user_token}", "Content-Type": "application/json"},
        timeout=15,
    )
    data = r.json()
    if data.get("code") != 0:
        return None, data.get("msg", "get_node 失败")
    node = data.get("data", {}).get("node", {})
    obj_type = node.get("obj_type")
    obj_token = node.get("obj_token")
    if obj_type != "bitable" or not obj_token:
        return None, f"该节点不是多维表格或缺少 obj_token(obj_type={obj_type})"
    return obj_token, None


def list_table_fields(user_token: str, app_token: str, table_id: str):
    """List the bitable's fields, used to verify 「标题」「时间」「附件」「进展状态」 etc."""
    r = requests.get(
        f"https://open.feishu.cn/open-apis/bitable/v1/apps/{app_token}/tables/{table_id}/fields",
        headers={"Authorization": f"Bearer {user_token}", "Content-Type": "application/json"},
        timeout=15,
    )
    data = r.json()
    if data.get("code") != 0:
        return None, data.get("msg", "list fields 失败")
    items = data.get("data", {}).get("items", [])
    return items, None


def parse_index_md(clips_dir: Path):
    """Parse 目录索引.md into a map of 序号 -> (标题, Hook, CTA)."""
    index_file = clips_dir / "目录索引.md"
    if not index_file.exists():
        return {}
    text = index_file.read_text(encoding="utf-8")
    rows = []
    in_table = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("|") and "标题" in line and "Hook" in line:
            in_table = True
            continue
        if in_table and line.startswith("|"):
            parts = [p.strip() for p in line.split("|") if p.strip()]
            if len(parts) >= 2 and parts[0].isdigit():
                rows.append((int(parts[0]), parts[1], parts[2] if len(parts) > 2 else "", parts[3] if len(parts) > 3 else ""))
        if in_table and not line.startswith("|") and line:
            break
    return {r[0]: {"title": r[1], "hook": r[2], "cta": r[3]} for r in rows}


def build_multi_platform_desc(meta: dict, title: str, cta: str = "关注我,每天学一招私域干货") -> str:
    """Build the multi-platform publishing copy (Douyin, Xiaohongshu, Channels)."""
    hook = meta.get("hook", "") or title
    cta = meta.get("cta") or cta
    # Shared hashtags
    tags = "#Soul派对 #创业日记 #晨间直播 #私域干货 #卡若创业派对"
    # Douyin: title ≤30 chars + description + hashtags
    dy_title = (title[:28] + "。" if len(title) > 28 else title) + f" {tags}"
    dy_desc = f"{hook}。{cta}"
    # Xiaohongshu: title + body
    xhs_title = title[:20] if len(title) > 20 else title
    xhs_body = f"{hook}。{cta} {tags}"
    # Channels: same shape as Douyin
    sp_desc = f"{hook}。{cta} {tags}"
    return (
        f"【抖音】\n标题:{dy_title}\n描述:{dy_desc}\n"
        f"【小红书】\n标题:{xhs_title}\n正文:{xhs_body}\n"
        f"【视频号】\n描述:{sp_desc}"
    )


def collect_clips(clips_dir: Path, session: int, date_str: str):
    """
    Collect slice info from the finished-clips directory.
    date_str: e.g. 2026-03-08
    Returns a list of dicts: field_title, time_text, file_path, description, multi_platform_desc, index
    """
    clips_dir = Path(clips_dir)
    index_map = parse_index_md(clips_dir)
    # Parse the date for display
    try:
        from datetime import datetime
        dt = datetime.strptime(date_str, "%Y-%m-%d")
        date_cn = f"{dt.month}月{dt.day}日"
    except Exception:
        date_cn = date_str

    records = []
    mp4_files = sorted(clips_dir.glob("*.mp4"))
    for i, fp in enumerate(mp4_files, 1):
        # File name (without extension) doubles as the title
        name_stem = fp.stem
        meta = index_map.get(i, {})
        title_from_index = meta.get("title", name_stem)
        # Display title: 119场 3月8日 第N场 标题
        field_title = f"{session}场 {date_cn} 第{i}场 {title_from_index}"
        # Description: the attachment format is 「描述,标题,附件」 — use Hook/CTA or the file name
        desc = meta.get("hook", "") or name_stem
        if meta.get("cta"):
            desc = f"{desc};{meta['cta']}" if desc else meta["cta"]
        multi_platform = build_multi_platform_desc(meta, title_from_index)
        records.append({
            "index": i,
            "field_title": field_title,
            "time_text": date_str,
            "file_path": fp,
            "description": desc or name_stem,
            "multi_platform_desc": multi_platform,
        })
    return records


def upload_media_to_feishu(user_token: str, app_token: str, file_path: Path) -> str | None:
    """Upload a video/file to Feishu drive for use as a bitable attachment. Returns the file_token."""
    if not file_path.exists():
        return None
    size = file_path.stat().st_size
    if size > 100 * 1024 * 1024:
        print(f" ⚠️ 文件过大跳过: {file_path.name} ({size // 1024 // 1024}MB)")
        return None
    url = "https://open.feishu.cn/open-apis/drive/v1/medias/upload_all"
    headers = {"Authorization": f"Bearer {user_token}"}
    mime = "video/mp4" if file_path.suffix.lower() == ".mp4" else "application/octet-stream"
    try:
        with open(file_path, "rb") as f:
            files = {
                "file_name": (None, file_path.name),
                "parent_type": (None, "bitable_file"),
                "parent_node": (None, app_token),
                "size": (None, str(size)),
                "file": (file_path.name, f, mime),
            }
            r = requests.post(url, headers=headers, files=files, timeout=180)
    except Exception as e:
        print(f" ⚠️ 上传异常 {file_path.name}: {e}")
        return None
    data = r.json()
    if data.get("code") == 0:
        return data.get("data", {}).get("file_token")
    # If bitable_file is not accepted as a parent, fall back to explorer
    if "parent" in (data.get("msg") or "").lower() or data.get("code") in (1254999, 1254003, 1254002):
        try:
            with open(file_path, "rb") as f:
                files = {
                    "file_name": (None, file_path.name),
                    "parent_type": (None, "explorer"),
                    "parent_node": (None, app_token),
                    "size": (None, str(size)),
                    "file": (file_path.name, f, mime),
                }
                r2 = requests.post(url, headers=headers, files=files, timeout=180)
            d2 = r2.json()
            if d2.get("code") == 0:
                return d2.get("data", {}).get("file_token")
        except Exception:
            pass
    print(f" ⚠️ 上传失败 {file_path.name}: {data.get('msg')}")
    return None


def create_records(
    user_token: str, app_token: str, table_id: str, records: list,
    group_value: str, field_map: dict, upload_attachment: bool = True
):
    """
    Append records to the bitable in batch.
    field_map: Feishu field name -> our record key
    upload_attachment: whether to upload the mp4 into the attachment field
    """
    created = 0
    for rec in records:
        fields = {}
        for feishu_name, our_key in field_map.items():
            if our_key == "field_title":
                fields[feishu_name] = rec["field_title"]
            elif our_key == "time_text":
                fields[feishu_name] = rec["time_text"]
            elif our_key == "group":
                fields[feishu_name] = group_value
            elif our_key == "multi_platform" and "multi_platform_desc" in rec:
                fields[feishu_name] = rec["multi_platform_desc"]
            elif our_key == "description" and "description" in rec:
                fields[feishu_name] = rec["description"]
            elif our_key == "attachment" and upload_attachment and rec.get("file_path"):
                ft = upload_media_to_feishu(user_token, app_token, rec["file_path"])
                if ft:
                    fields[feishu_name] = [{"file_token": ft}]
                time.sleep(0.3)
        body = {"fields": fields}
        r = requests.post(
            f"https://open.feishu.cn/open-apis/bitable/v1/apps/{app_token}/tables/{table_id}/records",
            headers={"Authorization": f"Bearer {user_token}", "Content-Type": "application/json"},
            json=body,
            timeout=15,
        )
        data = r.json()
        if data.get("code") == 0:
            created += 1
            print(f" ✅ 第{rec['index']}条: {rec['field_title'][:50]}...")
        else:
            print(f" ❌ 第{rec['index']}条 失败: {data.get('msg', data)}")
        time.sleep(0.2)
    return created


def list_records_by_title(user_token: str, app_token: str, table_id: str, title_prefix: str):
    """List records whose 标题 starts with title_prefix (e.g. 119场 3月8日); returns [(record_id, N, title), ...]."""
    all_records = []
    page_token = None
    while True:
        params = {"page_size": 100}
        if page_token:
            params["page_token"] = page_token
        r = requests.get(
            f"https://open.feishu.cn/open-apis/bitable/v1/apps/{app_token}/tables/{table_id}/records",
            headers={"Authorization": f"Bearer {user_token}", "Content-Type": "application/json"},
            params=params,
            timeout=15,
        )
        data = r.json()
        if data.get("code") != 0:
            return []
        items = data.get("data", {}).get("items", [])
        for it in items:
            raw = (it.get("fields") or {}).get("标题", "")
            if isinstance(raw, list) and raw:
                title = raw[0].get("text", "") if isinstance(raw[0], dict) else str(raw[0])
            else:
                title = str(raw) if raw else ""
            if title.startswith(title_prefix):
                m = re.search(r"第(\d+)场", str(title))
                idx = int(m.group(1)) if m else 0
                all_records.append((it.get("record_id"), idx, title))
        page_token = data.get("data", {}).get("page_token") or data.get("data", {}).get("next_page_token")
        if not page_token or not items:
            break
    return all_records


def update_existing_records(
    user_token: str, app_token: str, table_id: str, records: list,
    session: int, date_str: str, field_map: dict, upload_attachment: bool = True
):
    """
    Update existing records: match titles of the form 「session场 date_cn 第N场」 and backfill
    the attachment plus 你的解决方案 (multi-platform description).
    """
    try:
        from datetime import datetime
        dt = datetime.strptime(date_str, "%Y-%m-%d")
        date_cn = f"{dt.month}月{dt.day}日"
    except Exception:
        date_cn = date_str
    title_prefix = f"{session}场 {date_cn}"
    existing = list_records_by_title(user_token, app_token, table_id, title_prefix)
    if not existing:
        print(" ⚠️ 未找到匹配的已有记录,请先执行新建上传")
        return 0
    rec_by_idx = {r["index"]: r for r in records}
    updated = 0
    for record_id, idx, _ in existing:
        rec = rec_by_idx.get(idx)
        if not rec:
            continue
        fields = {}
        for feishu_name, our_key in field_map.items():
            if our_key == "multi_platform" and "multi_platform_desc" in rec:
                fields[feishu_name] = rec["multi_platform_desc"]
            elif our_key == "attachment" and upload_attachment and rec.get("file_path"):
                ft = upload_media_to_feishu(user_token, app_token, rec["file_path"])
                if ft:
                    fields[feishu_name] = [{"file_token": ft}]
                time.sleep(0.3)
        if not fields:
            continue
        r = requests.put(
            f"https://open.feishu.cn/open-apis/bitable/v1/apps/{app_token}/tables/{table_id}/records/{record_id}",
            headers={"Authorization": f"Bearer {user_token}", "Content-Type": "application/json"},
            json={"fields": fields},
            timeout=30,
        )
        data = r.json()
        if data.get("code") == 0:
            updated += 1
            print(f" ✅ 第{idx}条 已更新: 附件+多平台描述")
        else:
            print(f" ❌ 第{idx}条 更新失败: {data.get('msg')}")
        time.sleep(0.2)
    return updated


def main():
    ap = argparse.ArgumentParser(description="Soul 成片切片上传到飞书知识库多维表格")
    ap.add_argument("--wiki-node", default=DEFAULT_WIKI_NODE, help="知识库节点 token(URL 中 wiki/ 后)")
    ap.add_argument("--table", default=DEFAULT_TABLE_ID, help="多维表格 table_id(URL 中 table=)")
    ap.add_argument("--clips-dir", default=DEFAULT_CLIPS_DIR, help="成片目录(含 目录索引.md 与 mp4)")
    ap.add_argument("--session", type=int, default=119, help="场次,如 119")
    ap.add_argument("--date", default="2026-03-08", help="直播日期 YYYY-MM-DD")
    ap.add_argument("--group", default="2026年3月", help="看板分组值,如 2026年3月")
    ap.add_argument("--check-only", action="store_true", help="仅检查表格字段结构,不写入")
    ap.add_argument("--no-upload-attachment", action="store_true", help="不上传视频到附件字段(仅写标题/时间/描述)")
    ap.add_argument("--update-existing", action="store_true", help="仅更新已有记录,补写附件+多平台描述(按标题匹配)")
    args = ap.parse_args()

    user_token = get_token(args.wiki_node)
    if not user_token:
        print("❌ 无法获取飞书用户 Token,请先完成授权(如运行 write_today_three_focus 或 feishu_api 授权)")
        sys.exit(1)

    app_token, err = get_node_and_app_token(user_token, args.wiki_node)
    if err:
        print(f"❌ 获取多维表格失败: {err}")
        sys.exit(1)
    print(f"✅ 多维表格 app_token: {app_token[:20]}...")

    fields, err = list_table_fields(user_token, app_token, args.table)
    if err:
        print(f"❌ 获取字段列表失败: {err}")
        sys.exit(1)

    print("\n📋 当前表格字段(用于映射 标题/时间/进展状态/附件/描述):")
    for f in fields:
        name = f.get("field_name", "")
        typ = f.get("type", "")
        fid = f.get("field_id", "")
        print(f" - {name} (type={typ}, id={fid})")

    if args.check_only:
        print("\n✅ 仅检查完成,未写入。去掉 --check-only 可执行上传。")
        return

    clips_dir = Path(args.clips_dir)
    if not clips_dir.is_dir():
        print(f"❌ 成片目录不存在: {clips_dir}")
        sys.exit(1)

    records = collect_clips(clips_dir, args.session, args.date)
    if not records:
        print("❌ 未在成片目录下找到任何 mp4 或索引")
        sys.exit(1)
    print(f"\n📁 共 {len(records)} 条切片待写入,分组为「{args.group}」")

    # Map onto the table's actual fields (title, time, progress status, multi-platform description, attachment)
    field_map = {
        "标题": "field_title",
        "时间": "time_text",
        "进展状态": "group",
    }
    name_set = {f.get("field_name") for f in fields}
    if "你的解决方案" in name_set:
|
||||
field_map["你的解决方案"] = "multi_platform" # 抖音/小红书/视频号 多平台描述
|
||||
elif "描述" in name_set or "内容提炼" in name_set:
|
||||
field_map["描述" if "描述" in name_set else "内容提炼"] = "multi_platform"
|
||||
if "附件" in name_set and not args.no_upload_attachment:
|
||||
field_map["附件"] = "attachment"
|
||||
|
||||
upload_attach = not args.no_upload_attachment
|
||||
|
||||
if args.update_existing:
|
||||
print(f"\n📁 更新已有记录:补写附件 + 多平台描述(共 {len(records)} 条待匹配)")
|
||||
updated = update_existing_records(
|
||||
user_token, app_token, args.table, records,
|
||||
args.session, args.date,
|
||||
{k: v for k, v in field_map.items() if v in ("multi_platform", "attachment")},
|
||||
upload_attachment=upload_attach,
|
||||
)
|
||||
print(f"\n✅ 已更新 {updated} 条记录。")
|
||||
print(f" 链接: https://cunkebao.feishu.cn/wiki/{args.wiki_node}?table={args.table}")
|
||||
return
|
||||
|
||||
created = create_records(
|
||||
user_token, app_token, args.table, records, args.group, field_map,
|
||||
upload_attachment=upload_attach
|
||||
)
|
||||
attach_note = "" if upload_attach else ";附件未上传(已用 --no-upload-attachment 跳过)"
|
||||
print(f"\n✅ 已写入 {created}/{len(records)} 条记录。多平台描述已填入「你的解决方案」,附件已上传{attach_note}。")
|
||||
print(f" 链接: https://cunkebao.feishu.cn/wiki/{args.wiki_node}?table={args.table}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -1,6 +1,6 @@
#!/usr/bin/env python3
"""
今日飞书日志(3月21日定制):远志视频切片500/日、李永平、后台数据、上周3月总结优化、Token过期处理
今日飞书日志(3月定制):200视频/日、工具研发10~30切片、售内容产出、李永平、年度目标百分比
"""
import sys
from datetime import datetime
@@ -9,39 +9,41 @@ from pathlib import Path
SCRIPT_DIR = Path(__file__).resolve().parent
sys.path.insert(0, str(SCRIPT_DIR))

from auto_log import get_token_silent, write_log, open_result, resolve_wiki_token_for_date
from auto_log import get_token_silent, write_log, open_result, resolve_wiki_token_for_date, CONFIG


def build_tasks_today():
    """今日:远志、李永平、后台数据、上周总结、Token 处理"""
    """今日:200视频/日、工具研发10~30切片、售内容产出、按年度目标百分比"""
    today = datetime.now()
    date_str = f"{today.month}月{today.day}日"

    return [
        {
            "person": "远志(玩值)",
            "events": ["视频剪辑→切片→分发全网", "每天500视频目标", "SOP与视频切片"],
            "events": ["每天200视频分发", "切片工具研发", "售内容产出"],
            "quadrant": "重要紧急",
            "t_targets": [
                "视频剪辑相关内容做成切片 → 分发到全网",
                "目标:每天发 500 个视频",
                "SOP 与视频切片流程做好;接下来一个月每天要做的事列出清单",
                "目标:每天发 200 个视频,工具分发到各平台",
                "工具研发:每天切 10-30 个视频的切片工具",
                "售内容:售的内容产出,按整个业务与年度目标百分比推进",
            ],
            "n_process": [
                "源自远志安排:整体视频剪辑→切片→分发;以量取胜,发视频+切片为核心动作",
                "源自远志安排:200 视频/日分发;工具负责 10~30 切片/日;售内容与年度目标对齐",
            ],
            "t_thoughts": [
                "切片 SOP 标准化后,可规模化执行;一个月每日任务清单便于追踪与复盘",
                "工具研发 + 售内容产出 = 支撑 200 视频/日;百分比以 2026 年整体目标为基准",
            ],
            "w_work": [
                "完善视频切片 SOP(剪辑→切片→分发)",
                "制定接下来一个月每天视频任务清单",
                "执行切片并分发到全网(目标 500/日)",
                "工具分发:200 视频/日 → 各平台",
                "工具研发:每天切 10-30 个视频的切片工具",
                "售内容产出(含内容生产)",
                "按业务与年度目标百分比追踪",
            ],
            "f_feedback": [
                "SOP 进行中 🔄",
                "每日任务清单待输出 🔄",
                "500 视频/日 → 当日完成度 X% 🔄",
                "200 视频/日 → 当日完成度 X%",
                "10~30 切片工具 研发中",
                "售内容产出 进行中",
                "本月/年度目标 % 见整体目标",
            ],
        },
        {
@@ -60,17 +62,17 @@
                "yongpxu-soul 分支同步与联调",
            ],
            "f_feedback": [
                "交接与分支 进行中 🔄",
                "交接与分支 进行中",
            ],
        },
        {
            "person": "卡若",
            "events": ["后台数据链接", "上周3月总结", "Token 过期处理"],
            "events": ["年度目标百分比", "后台数据", "Token 过期处理"],
            "quadrant": "重要不紧急",
            "t_targets": [
                "后台数据链接整理与可访问性确认",
                "上周 3 月总结检查与优化,写清进度",
                "Token 过期:直接执行命令处理",
                "按业务与年度目标百分比追踪(以 2026 年整体目标为基准)",
                "本月目标约 12%,距最终目标差 88%",
                "后台数据链接、Token 过期直接命令处理",
            ],
            "n_process": [
                "后台数据:神射手 kr-users.quwanzhi.com、玩值电竞 localhost:3001,见项目与端口注册表",
@@ -85,9 +87,9 @@
                "飞书日志写入",
            ],
            "f_feedback": [
                "后台数据链接 见 00_账号与API索引、项目与端口注册表 ✅",
                "上周总结 已检查优化 🔄",
                "Token 已刷新 ✅",
                "后台数据链接 见 00_账号与API索引、项目与端口注册表",
                "上周总结 已检查优化",
                "Token 已刷新",
            ],
        },
    ]
@@ -97,7 +99,7 @@
    today = datetime.now()
    date_str = f"{today.month}月{today.day}日"
    print("=" * 50)
    print(f"📝 写入今日飞书日志(远志+李永平+后台+上周总结):{date_str}")
    print(f"📝 写入今日飞书日志(200视频+工具研发+售内容+年度目标%):{date_str}")
    print("=" * 50)

    token = get_token_silent()
@@ -108,17 +110,14 @@
    tasks = build_tasks_today()
    target_wiki_token = resolve_wiki_token_for_date(date_str)
    ok = write_log(token, date_str, tasks, target_wiki_token, overwrite=True)
    # 无论成功失败,写完都打开飞书
    open_token = target_wiki_token or (CONFIG.get("MONTH_WIKI_TOKENS") or {}).get(2) or CONFIG.get("WIKI_TOKEN")
    open_result(open_token)
    if ok:
        open_result(target_wiki_token)
        print(f"✅ {date_str} 飞书日志已更新")
        print(f"✅ {date_str} 飞书日志已写入飞书")
        sys.exit(0)
    print("❌ 写入失败")
    ref_path = SCRIPT_DIR.parent / "参考资料" / f"{date_str}_飞书日志_远志李永平.md"
    ref_path.parent.mkdir(parents=True, exist_ok=True)
    # 生成可粘贴的 Markdown 备用
    lines = [f"# {date_str} 飞书日志\n", "## 远志(玩值)\n", "- 视频剪辑→切片→分发全网,目标 500/日\n", "- SOP 与视频切片做好\n", "## 李永平\n", "- 一场创业实验、yongpxu-soul 分支\n", "## 后台数据\n", "- 神射手 / 玩值电竞 见项目与端口注册表\n", "## Token 过期\n", "- 执行:python3 feishu_token_cli.py get-access-token\n"]
    ref_path.write_text("".join(lines), encoding="utf-8")
    print(f"💡 可复制 {ref_path} 内容到飞书 3 月文档粘贴")
    print("❌ 写入失败(见上方提示:token/月份不符时请先迁当月文档并 set-march-token)")
    print("📎 飞书日志固定链接:https://cunkebao.feishu.cn/wiki/ZdSBwHrsGii14HkcIbccQ0flnee")
    sys.exit(1)

@@ -13,7 +13,7 @@ from pathlib import Path
SCRIPT_DIR = Path(__file__).resolve().parent
sys.path.insert(0, str(SCRIPT_DIR))

from auto_log import get_token_silent, write_log, open_result, resolve_wiki_token_for_date
from auto_log import get_token_silent, write_log, open_result, resolve_wiki_token_for_date, CONFIG


def build_tasks_today_three_focus():
@@ -94,14 +94,13 @@
    tasks = build_tasks_today_three_focus()
    target_wiki_token = resolve_wiki_token_for_date(date_str)
    ok = write_log(token, date_str, tasks, target_wiki_token, overwrite=args.overwrite)
    open_token = target_wiki_token or (CONFIG.get("MONTH_WIKI_TOKENS") or {}).get(2) or CONFIG.get("WIKI_TOKEN")
    open_result(open_token)
    if ok:
        open_result(target_wiki_token)
        print(f"✅ {date_str} 飞书日志已写入(三件事 + 前面未完成)")
        print(f"✅ {date_str} 飞书日志已写入飞书")
        sys.exit(0)
    print("❌ 写入失败")
    ref_path = SCRIPT_DIR.parent / "参考资料" / f"{date_str}_飞书日志正文_三件事与未完成.md"
    if ref_path.exists():
        print(f"💡 可复制 {ref_path} 内容到飞书 3 月文档手动粘贴")
    print("❌ 写入失败(文档月份不符时请先迁当月文档并 set-march-token)")
    print("📎 飞书日志固定链接:https://cunkebao.feishu.cn/wiki/ZdSBwHrsGii14HkcIbccQ0flnee")
    sys.exit(1)

@@ -11,7 +11,7 @@ from pathlib import Path
SCRIPT_DIR = Path(__file__).resolve().parent
sys.path.insert(0, str(SCRIPT_DIR))

from auto_log import get_token_silent, write_log, open_result, resolve_wiki_token_for_date
from auto_log import get_token_silent, write_log, open_result, resolve_wiki_token_for_date, CONFIG

REF_DIR = SCRIPT_DIR.parent / "参考资料"

@@ -155,10 +155,10 @@
    open_result(target_wiki_token)
    print(f"✅ {date_str} 飞书日志已更新(含进度汇总与目标百分比)")
    sys.exit(0)
    print("❌ 写入失败")
    ref_path = SCRIPT_DIR.parent / "参考资料" / f"{date_str}_飞书日志_进度汇总与百分比.md"
    if ref_path.exists():
        print(f"💡 可复制 {ref_path} 内容到飞书 3 月文档「{date_str}」下粘贴")
    open_token = target_wiki_token or (CONFIG.get("MONTH_WIKI_TOKENS") or {}).get(2) or CONFIG.get("WIKI_TOKEN")
    open_result(open_token)
    print("❌ 写入失败(文档月份不符时请先迁当月文档并 set-march-token)")
    print("📎 飞书日志固定链接:https://cunkebao.feishu.cn/wiki/ZdSBwHrsGii14HkcIbccQ0flnee")
    sys.exit(1)

94
03_卡木(木)/木叶_视频内容/B站发布/SKILL.md
Normal file
@@ -0,0 +1,94 @@
---
name: B站发布
description: >
  纯 API 命令行方式发布视频到 B站(不打开浏览器)。通过逆向 B站创作中心的 preupload 分片上传接口,
  实现 Cookie 认证 → preupload → 分片上传 → complete → add/v3 发布的完整链路。
  封面自动取视频第一帧。
triggers: B站发布、发布到B站、B站登录、B站上传、bilibili发布
owner: 木叶
group: 木
version: "1.0"
updated: "2026-03-10"
---

# B站发布 Skill(v1.0)

> **核心能力**:纯 Python 命令行,无需打开浏览器,通过 B站 preupload 系列 HTTP API 实现视频上传与发布。
> **认证方式**:Playwright 扫码登录获取 Cookie(SESSDATA、bili_jct 等),之后全程 API 操作。
> **适用场景**:Soul 派对切片批量分发、定时发布、自动化工作流。

---

## 一、纯 API 完整流程(5 步)

```
[Step 1] Cookie 认证
    Playwright 扫码登录 → bilibili_storage_state.json
    关键 Cookie: SESSDATA, bili_jct(CSRF), DedeUserID

[Step 2] 获取上传节点 (preupload)
    GET member.bilibili.com/preupload
    参数: name, size, r=upos, profile=ugcfr/pc3
    返回: upos_uri, auth, endpoint, chunk_size, biz_id

[Step 3] 初始化上传
    POST /{upos_uri}?uploads&output=json
    返回: upload_id

[Step 4] 分片上传 + 确认
    PUT /{upos_uri}?partNumber=N&uploadId=...&chunk=N&chunks=total
    POST /{upos_uri}?output=json&profile=ugcfr/pc3&uploadId=...
    body: {"parts": [{"partNumber": 1, "eTag": "etag"}]}

[Step 5] 发布视频
    POST /x/vu/web/add/v3
    body: {title, desc, tid, tag, videos: [{filename, title, desc}], cover, csrf}
```

---

## 二、一键命令

```bash
cd /Users/karuo/Documents/个人/卡若AI/03_卡木(木)/木叶_视频内容/B站发布/脚本

# 1. 首次或 Cookie 过期:扫码登录
python3 bilibili_login.py

# 2. 批量发布(成片目录下所有 .mp4)
python3 bilibili_publish.py

# 3. 发布单条
python3 bilibili_publish.py --video "/path/to/video.mp4" --title "标题"
```

---

## 三、Cookie 有效期

| Cookie | 有效期 | 说明 |
|--------|--------|------|
| SESSDATA | ~6 个月 | 主认证 Cookie,过期需重新扫码 |
| bili_jct | ~6 个月 | CSRF token,提交时必带 |
| DedeUserID | ~6 个月 | 用户 ID |

B站 Cookie 有效期较长(约 6 个月),相比抖音稳定得多。
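SESSDATA 的剩余有效期可以直接从 storage_state.json 里读出来,提前判断是否需要重新扫码。下面是一个最小示意(假设文件为 Playwright 的 storage_state 格式;函数名 `sessdata_days_left` 为本文示例自拟):

```python
import json
import time
from pathlib import Path

def sessdata_days_left(state_file: str = "bilibili_storage_state.json") -> float:
    """读取 SESSDATA 的 expires(Unix 时间戳),返回剩余天数;未找到或会话 Cookie 返回 -1。"""
    state = json.loads(Path(state_file).read_text(encoding="utf-8"))
    for c in state.get("cookies", []):
        if c.get("name") == "SESSDATA" and c.get("expires", -1) > 0:
            return (c["expires"] - time.time()) / 86400
    return -1.0
```

剩余天数低于某个阈值(比如 7 天)时再触发 bilibili_login.py 重新扫码即可。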

---

## 四、相关文件

| 文件 | 说明 |
|------|------|
| `脚本/bilibili_publish.py` | **主脚本**:纯 API 视频上传+发布 |
| `脚本/bilibili_login.py` | Playwright 扫码登录 |
| `脚本/bilibili_storage_state.json` | Cookie 存储(生成后自动创建) |

---

## 五、依赖

- Python 3.10+
- httpx, playwright
- ffmpeg(封面提取)
- 共享工具:`多平台分发/脚本/cookie_manager.py`、`video_utils.py`
44
03_卡木(木)/木叶_视频内容/B站发布/脚本/bilibili_login.py
Normal file
@@ -0,0 +1,44 @@
#!/usr/bin/env python3
"""B站 Cookie 获取 - Playwright 扫码登录 → 保存 storage_state"""
import asyncio
from pathlib import Path
from playwright.async_api import async_playwright

COOKIE_FILE = Path(__file__).parent / "bilibili_storage_state.json"
LOGIN_URL = "https://passport.bilibili.com/login"

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)


async def main():
    print("即将弹出浏览器,请用 B站 APP 扫码登录。")
    print("登录成功后(看到创作中心页面),按 Enter 或在 Inspector 点绿色 ▶。\n")

    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)
        context = await browser.new_context(user_agent=UA, viewport={"width": 1280, "height": 720})
        await context.add_init_script("Object.defineProperty(navigator,'webdriver',{get:()=>undefined})")
        page = await context.new_page()
        await page.goto(LOGIN_URL, timeout=60000)

        print("等待登录完成...")
        try:
            await page.wait_for_url("**/member/**", timeout=120000)
        except Exception:
            print("未自动检测到跳转,请手动确认已登录后按 Enter")
            await page.pause()

        await context.storage_state(path=str(COOKIE_FILE))
        await context.close()
        await browser.close()

    print(f"\n[✓] B站 Cookie 已保存: {COOKIE_FILE}")
    print(f"    文件大小: {COOKIE_FILE.stat().st_size} bytes")
    print("现在可运行 bilibili_publish.py 批量发布。")


if __name__ == "__main__":
    asyncio.run(main())
395
03_卡木(木)/木叶_视频内容/B站发布/脚本/bilibili_publish.py
Normal file
@@ -0,0 +1,395 @@
#!/usr/bin/env python3
"""
B站纯 API 视频发布(无浏览器)
基于推兔逆向分析: preupload → 分片上传 → commitUpload → add/v3

流程:
1. 从 storage_state.json 加载 cookies
2. GET /preupload → 上传节点、auth、chunk 参数
3. POST /{upos_uri}?uploads → upload_id
4. PUT 分片上传
5. POST /{upos_uri}?complete → 确认
6. POST /x/vu/web/add/v3 → 发布视频
"""
import asyncio
import hashlib
import json
import os
import sys
import time
from pathlib import Path

import httpx

SCRIPT_DIR = Path(__file__).parent
COOKIE_FILE = SCRIPT_DIR / "bilibili_storage_state.json"
VIDEO_DIR = Path("/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片")
COVER_DIR = SCRIPT_DIR / "covers"

sys.path.insert(0, str(SCRIPT_DIR.parent.parent / "多平台分发" / "脚本"))
from cookie_manager import CookieManager
from video_utils import extract_cover

BASE = "https://member.bilibili.com"
PREUPLOAD_URL = f"{BASE}/preupload"
ADD_V3_URL = f"{BASE}/x/vu/web/add/v3"
USER_INFO_URL = "https://api.bilibili.com/x/web-interface/nav"

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)

CHUNK_SIZE = 4 * 1024 * 1024

TITLES = {
    "早起不是为了开派对,是不吵老婆睡觉.mp4":
        "每天6点起床不是因为自律,是因为老婆还在睡 #Soul派对 #创业日记",
    "懒人的活法 动作简单有利可图正反馈.mp4":
        "懒人也能赚钱?动作简单、有利可图、正反馈 #Soul派对 #副业思维",
    "初期团队先找两个IS,比钱好使 ENFJ链接人,ENTJ指挥.mp4":
        "创业初期先找两个IS型人格,比融资好使十倍 #MBTI创业 #团队搭建",
    "ICU出来一年多 活着要在互联网上留下东西.mp4":
        "ICU出来一年多,活着就要在互联网上留下东西 #人生感悟 #创业觉醒",
    "MBTI疗愈SOUL 年轻人测MBTI,40到60岁走五行八卦.mp4":
        "20岁测MBTI,40岁该学五行八卦了 #MBTI #认知觉醒",
    "Soul业务模型 派对+切片+小程序全链路.mp4":
        "派对获客→AI切片→小程序变现,全链路拆解 #商业模式 #一人公司",
    "Soul切片30秒到8分钟 AI半小时能剪10到30个.mp4":
        "AI剪辑半小时出10到30条切片,内容工厂效率密码 #AI剪辑 #内容效率",
    "刷牙听业务逻辑 Soul切片变现怎么跑.mp4":
        "刷牙3分钟听完一套变现逻辑 #碎片创业 #副业逻辑",
    "国学易经怎么学 两小时七七八八,召唤作者对话.mp4":
        "易经两小时学个七七八八,关键是跟古人对话 #国学 #易经入门",
    "广点通能投Soul了,1000曝光6到10块.mp4":
        "广点通能投Soul了!1000曝光只要6到10块 #广点通 #低成本获客",
    "建立信任不是求来的 卖外挂发邮件三个月拿下德国总代.mp4":
        "信任不是求来的,发三个月邮件拿下德国总代理 #销售思维 #信任建立",
    "核心就两个字 筛选。能开派对坚持7天的人再谈.mp4":
        "核心就两个字:筛选。能坚持7天的人才值得深聊 #筛选思维 #创业认知",
    "睡眠不好?每天放下一件事,做减法.mp4":
        "睡不好不是太累,是脑子装太多,每天做减法 #做减法 #心理健康",
    "这套体系花了170万,但前端几十块就能参与.mp4":
        "后端花170万搭体系,前端几十块就能参与 #商业认知 #体系思维",
    "金融AI获客体系 后端30人沉淀12年,前端丢手机.mp4":
        "后端30人沉淀12年,前端就丢个手机号 #AI获客 #系统思维",
}

async def check_login(client: httpx.AsyncClient, cookies: CookieManager) -> dict:
    resp = await client.get(
        USER_INFO_URL,
        headers={"Cookie": cookies.cookie_str, "User-Agent": UA, "Referer": "https://www.bilibili.com/"},
    )
    data = resp.json()
    if data.get("code") != 0:
        return {}
    return data.get("data", {})


async def preupload(client: httpx.AsyncClient, cookies: CookieManager, filename: str, filesize: int) -> dict:
    """获取上传节点和参数"""
    print(" [1] 获取上传节点...")
    params = {
        "name": filename,
        "size": filesize,
        "r": "upos",
        "profile": "ugcfr/pc3",
        "ssl": "0",
        "version": "2.14.0.0",
        "build": "2140000",
        "upcdn": "bda2",
        "probe_version": "20221109",
    }
    resp = await client.get(
        PREUPLOAD_URL,
        params=params,
        headers={"Cookie": cookies.cookie_str, "User-Agent": UA},
    )
    resp.raise_for_status()
    data = resp.json()
    if "upos_uri" not in data:
        raise RuntimeError(f"preupload 失败: {data}")
    endpoint = data.get("endpoint", "")
    if not endpoint:
        endpoints = data.get("endpoints", [])
        endpoint = endpoints[0] if endpoints else "upos-cs-upcdnbda2.bilivideo.com"
    if not endpoint.startswith("http"):
        endpoint = f"https://{endpoint}"
    print(f"    endpoint={endpoint}, chunk_size={data.get('chunk_size', CHUNK_SIZE)}")
    return {
        "endpoint": endpoint,
        "upos_uri": data["upos_uri"],
        "auth": data.get("auth", ""),
        "biz_id": data.get("biz_id", 0),
        "chunk_size": data.get("chunk_size", CHUNK_SIZE),
    }


async def init_upload(client: httpx.AsyncClient, info: dict, cookies: CookieManager) -> str:
    """初始化上传,获取 upload_id"""
    print(" [2] 初始化上传...")
    upos_uri = info["upos_uri"].replace("upos://", "")
    url = f"{info['endpoint']}/{upos_uri}?uploads&output=json"
    headers = {
        "X-Upos-Auth": info["auth"],
        "User-Agent": UA,
        "Origin": "https://member.bilibili.com",
        "Referer": "https://member.bilibili.com/",
    }
    resp = await client.post(url, headers=headers)
    resp.raise_for_status()
    data = resp.json()
    upload_id = data.get("upload_id", "")
    if not upload_id:
        raise RuntimeError(f"init upload 失败: {data}")
    print(f"    upload_id={upload_id[:30]}...")
    return upload_id


async def upload_chunks(
    client: httpx.AsyncClient, info: dict, upload_id: str, file_path: str
) -> list:
    """分片上传视频"""
    print(" [3] 分片上传...")
    raw = Path(file_path).read_bytes()
    total_size = len(raw)
    chunk_size = info.get("chunk_size", CHUNK_SIZE)
    n_chunks = (total_size + chunk_size - 1) // chunk_size
    upos_uri = info["upos_uri"].replace("upos://", "")
    base_url = f"{info['endpoint']}/{upos_uri}"

    parts = []
    for i in range(n_chunks):
        start = i * chunk_size
        end = min(start + chunk_size, total_size)
        chunk = raw[start:end]
        md5 = hashlib.md5(chunk).hexdigest()

        url = (
            f"{base_url}?partNumber={i+1}&uploadId={upload_id}"
            f"&chunk={i}&chunks={n_chunks}&size={len(chunk)}"
            f"&start={start}&end={end}&total={total_size}"
        )
        resp = await client.put(
            url,
            content=chunk,
            headers={
                "X-Upos-Auth": info["auth"],
                "User-Agent": UA,
                "Content-Type": "application/octet-stream",
            },
            timeout=120.0,
        )
        if resp.status_code not in (200, 204):
            print(f"    chunk {i+1}/{n_chunks} 失败: {resp.status_code}")
            return []
        parts.append({"partNumber": i + 1, "eTag": "etag"})
        print(f"    chunk {i+1}/{n_chunks} ok ({len(chunk)/1024:.0f}KB)")

    return parts

async def complete_upload(
    client: httpx.AsyncClient, info: dict, upload_id: str,
    parts: list, filename: str
) -> bool:
    """确认上传完成"""
    print(" [4] 确认上传...")
    upos_uri = info["upos_uri"].replace("upos://", "")
    url = (
        f"{info['endpoint']}/{upos_uri}"
        f"?output=json&profile=ugcfr%2Fpc3&uploadId={upload_id}"
        f"&biz_id={info['biz_id']}"
    )
    body = {"parts": parts}
    resp = await client.post(
        url,
        json=body,
        headers={
            "X-Upos-Auth": info["auth"],
            "User-Agent": UA,
            "Content-Type": "application/json",
        },
        timeout=30.0,
    )
    data = resp.json() if resp.status_code == 200 else {}
    if data.get("OK") == 1:
        print("    上传确认成功")
        return True
    print(f"    上传确认: {data}")
    return True


async def add_video(
    client: httpx.AsyncClient, cookies: CookieManager,
    filename: str, title: str, upos_uri: str,
    cover_url: str = "", desc: str = "",
) -> dict:
    """发布视频 POST /x/vu/web/add/v3"""
    print(" [5] 发布视频...")
    csrf = cookies.get("bili_jct")

    body = {
        "copyright": 1,
        "videos": [{
            "filename": upos_uri.replace("upos://", "").rsplit(".", 1)[0],
            "title": Path(filename).stem,
            "desc": "",
        }],
        "tid": 21,  # 日常分区
        "title": title,
        "desc": desc or title,
        "tag": "Soul派对,创业,认知觉醒,副业,商业思维",
        "dynamic": "",
        "cover": cover_url,
        "dolby": 0,
        "lossless_music": 0,
        "no_reprint": 0,
        "open_elec": 0,
        "csrf": csrf,
    }

    resp = await client.post(
        ADD_V3_URL,
        json=body,
        headers={
            "Cookie": cookies.cookie_str,
            "User-Agent": UA,
            "Content-Type": "application/json",
            "Referer": "https://member.bilibili.com/platform/upload/video/frame",
            "Origin": "https://member.bilibili.com",
        },
        timeout=30.0,
    )
    data = resp.json()
    print(f"    响应: {json.dumps(data, ensure_ascii=False)[:300]}")
    return data


async def upload_cover(
    client: httpx.AsyncClient, cookies: CookieManager, cover_path: str
) -> str:
    """上传封面图片,返回 URL"""
    if not cover_path or not Path(cover_path).exists():
        return ""
    print(" [*] 上传封面...")
    url = f"{BASE}/x/vu/web/cover/up"
    csrf = cookies.get("bili_jct")
    with open(cover_path, "rb") as f:
        cover_data = f.read()
    resp = await client.post(
        url,
        files={"file": ("cover.jpg", cover_data, "image/jpeg")},
        data={"csrf": csrf},
        headers={
            "Cookie": cookies.cookie_str,
            "User-Agent": UA,
            "Referer": "https://member.bilibili.com/",
        },
        timeout=30.0,
    )
    data = resp.json()
    if data.get("code") == 0:
        cover_url = data.get("data", {}).get("url", "")
        print(f"    封面 URL: {cover_url[:60]}...")
        return cover_url
    print(f"    封面上传失败: {data}")
    return ""

async def publish_one(video_path: str, title: str, idx: int = 1, total: int = 1) -> bool:
    fname = Path(video_path).name
    fsize = Path(video_path).stat().st_size

    print(f"\n{'='*60}")
    print(f" [{idx}/{total}] {fname}")
    print(f" 大小: {fsize/1024/1024:.1f}MB")
    print(f" 标题: {title[:60]}")
    print(f"{'='*60}")

    try:
        cookies = CookieManager(COOKIE_FILE, "bilibili.com")
        if not cookies.is_valid():
            print(" [✗] Cookie 已过期,请重新运行 bilibili_login.py")
            return False

        async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
            user = await check_login(client, cookies)
            if not user:
                print(" [✗] Cookie 无效,请重新登录")
                return False
            print(f" 用户: {user.get('uname', 'unknown')}")

            cover_path = extract_cover(video_path)
            cover_url = await upload_cover(client, cookies, cover_path) if cover_path else ""

            info = await preupload(client, cookies, fname, fsize)
            upload_id = await init_upload(client, info, cookies)
            parts = await upload_chunks(client, info, upload_id, video_path)
            if not parts:
                print(" [✗] 上传失败")
                return False
            await complete_upload(client, info, upload_id, parts, fname)

            result = await add_video(
                client, cookies, fname, title,
                info["upos_uri"], cover_url=cover_url,
            )

            if result.get("code") == 0:
                bvid = result.get("data", {}).get("bvid", "")
                print(f" [✓] 发布成功! bvid={bvid}")
                return True
            else:
                print(f" [✗] 发布失败: code={result.get('code')}, msg={result.get('message')}")
                return False

    except Exception as e:
        print(f" [✗] 异常: {e}")
        import traceback
        traceback.print_exc()
        return False


async def main():
    if not COOKIE_FILE.exists():
        print("[✗] Cookie 不存在,请先运行 bilibili_login.py")
        return 1

    cookies = CookieManager(COOKIE_FILE, "bilibili.com")
    print(f"[i] Cookie 状态: {cookies.check_expiry()['message']}")

    async with httpx.AsyncClient(timeout=15.0) as c:
        user = await check_login(c, cookies)
        if not user:
            print("[✗] Cookie 无效")
            return 1
        print(f"[✓] 用户: {user.get('uname')} (uid={user.get('mid')})\n")

    videos = sorted(VIDEO_DIR.glob("*.mp4"))
    if not videos:
        print("[✗] 未找到视频")
        return 1
    print(f"[i] 共 {len(videos)} 条视频\n")

    results = []
    for i, vp in enumerate(videos):
        title = TITLES.get(vp.name, f"{vp.stem} #Soul派对 #创业日记")
        ok = await publish_one(str(vp), title, i + 1, len(videos))
        results.append((vp.name, ok))
        if i < len(videos) - 1:
            await asyncio.sleep(5)

    print(f"\n{'='*60}")
    print(" B站发布汇总")
    print(f"{'='*60}")
    for name, ok in results:
        print(f" [{'✓' if ok else '✗'}] {name}")
    success = sum(1 for _, ok in results if ok)
    print(f"\n 成功: {success}/{len(results)}")
    return 0 if success == len(results) else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
165
03_卡木(木)/木叶_视频内容/多平台分发/SKILL.md
Normal file
@@ -0,0 +1,165 @@
---
name: 多平台分发
description: >
  一键将视频分发到 5 个平台(抖音、B站、视频号、小红书、快手)。
  自动检测各平台 Cookie 状态,跳过未登录/过期的平台。
  封面统一用视频第一帧,Cookie 统一管理防重复获取。
triggers: 多平台分发、一键分发、全平台发布、批量分发、视频分发
owner: 木叶
group: 木
version: "1.0"
updated: "2026-03-10"
---

# 多平台分发 Skill(v1.0)

> **核心能力**:一条命令将成片目录下的所有视频同时发布到 5 个主流平台。
> **平台覆盖**:抖音、B站、视频号、小红书、快手。
> **技术路线**:B站/视频号 用 HTTP API 直传(推兔逆向),抖音用纯 API(逆向 VOD),小红书/快手用逆向 creator API。
> **Cookie 管理**:统一 cookie_manager.py 管理有效期,防止重复登录。

---

## 一、平台与实现方式

| 平台 | 实现方式 | API 来源 | Cookie 有效期 |
|------|----------|----------|---------------|
| **抖音** | 纯 API(VOD + bd-ticket-guard) | 独立逆向 | ~2-4h |
| **B站** | HTTP API(preupload 分片) | 推兔逆向 + 社区知识 | ~6 个月 |
| **视频号** | HTTP API(finder-assistant 分片) | 推兔逆向 | ~24-48h |
| **小红书** | 逆向 creator API | creator.xiaohongshu.com | ~1-3 天 |
| **快手** | 逆向 creator API | cp.kuaishou.com | ~7-30 天 |

---

## 二、一键命令

```bash
cd /Users/karuo/Documents/个人/卡若AI/03_卡木(木)/木叶_视频内容/多平台分发/脚本

# 检查所有平台 Cookie 状态
python3 distribute_all.py --check

# 分发到所有已登录的平台
python3 distribute_all.py

# 只分发到指定平台
python3 distribute_all.py --platforms 抖音 B站

# 分发单条视频
python3 distribute_all.py --video "/path/to/video.mp4"

# 自定义视频目录
python3 distribute_all.py --video-dir "/path/to/videos/"
```

---

## 三、首次使用流程

```
1. 安装依赖
   pip3 install httpx playwright cryptography Pillow
   playwright install chromium

2. 逐个平台登录(只需首次)
   python3 ../抖音发布/脚本/douyin_login.py
   python3 ../B站发布/脚本/bilibili_login.py
   python3 ../视频号发布/脚本/channels_login.py
   python3 ../小红书发布/脚本/xiaohongshu_login.py
   python3 ../快手发布/脚本/kuaishou_login.py

3. 检查 Cookie 状态
   python3 distribute_all.py --check

4. 一键分发
   python3 distribute_all.py
```

---

## 四、Cookie 管理

### 4.1 统一管理器

`cookie_manager.py` 提供:
- 加载 Playwright storage_state.json
- 检查 Cookie 有效期(ok / warning / expiring_soon / expired)
- 提供 cookie_str / cookie_dict
- 批量检查所有平台状态
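
cookie_str 的拼装逻辑可以用一个自包含的小示意说明(`build_cookie_str` 为本文示例自拟的函数名,实际请用 cookie_manager.py 里的 CookieManager):

```python
import json
from pathlib import Path

def build_cookie_str(state_file: str, domain_filter: str = "") -> str:
    """示意:把 Playwright storage_state.json 里的 cookies 拼成 "k=v; k2=v2" 请求头串。"""
    state = json.loads(Path(state_file).read_text(encoding="utf-8"))
    return "; ".join(
        f"{c['name']}={c['value']}"
        for c in state.get("cookies", [])
        if not domain_filter or domain_filter in c.get("domain", "")
    )
```

domain_filter 用于在同一个文件里只取目标平台的 Cookie,例如传 "bilibili.com"。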

### 4.2 有效期对比

| 平台 | Cookie 有效期 | 建议刷新频率 |
|------|-------------|-------------|
| 抖音 | ~2-4h | 每次使用前 |
| B站 | ~6 个月 | 半年一次 |
| 视频号 | ~24-48h | 每天 |
| 小红书 | ~1-3 天 | 2-3 天 |
| 快手 | ~7-30 天 | 每周 |

### 4.3 防重复获取

Cookie 文件保存后自动记录时间戳,`cookie_manager.py` 通过文件修改时间判断年龄。
若 Cookie 仍有效,不会触发重新登录。
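
「是否需要重新登录」可以简化为比较文件年龄与平台建议有效期,下面是一个按 mtime 估算的示意(阈值与函数名为本文示例自拟,实际判断以 cookie_manager.py 的 check_expiry 为准):

```python
import time
from pathlib import Path

def cookie_age_hours(state_file: str) -> float:
    """按文件修改时间(mtime)估算 Cookie 已保存的小时数。"""
    return (time.time() - Path(state_file).stat().st_mtime) / 3600

def need_relogin(state_file: str, max_age_hours: float) -> bool:
    """文件不存在或超过平台建议有效期时才重新扫码,避免重复登录。"""
    p = Path(state_file)
    return (not p.exists()) or cookie_age_hours(state_file) > max_age_hours
```

例如视频号按 24 小时、B站按 6 个月(约 4380 小时)设阈值即可。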
|
||||
|
||||
---
|
||||
|
||||
## 五、视频处理
|
||||
|
||||
### 5.1 封面提取
|
||||
|
||||
`video_utils.py` 使用 ffmpeg 提取视频第一帧(0.5s 处)作为封面:
|
||||
|
||||
```python
|
||||
from video_utils import extract_cover
|
||||
cover_path = extract_cover("/path/to/video.mp4")
|
||||
```
|
||||
|
||||
### 5.2 视频元数据
|
||||
|
||||
```python
|
||||
from video_utils import get_video_info
|
||||
info = get_video_info("/path/to/video.mp4")
|
||||
# {'duration': 180.5, 'width': 1080, 'height': 1920, ...}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 六、目录结构
|
||||
|
||||
```
|
||||
木叶_视频内容/
|
||||
├── 多平台分发/ ← 本 Skill(调度器 + 共享工具)
|
||||
│ ├── SKILL.md
|
||||
│ └── 脚本/
|
||||
│ ├── distribute_all.py # 一键分发调度器
|
||||
│ ├── cookie_manager.py # Cookie 统一管理
|
||||
│ ├── video_utils.py # 视频处理(封面、元数据)
|
||||
│ └── requirements.txt
|
||||
├── 抖音发布/ ← 已有,纯 API
|
||||
├── B站发布/ ← 新增,HTTP API(preupload)
|
||||
├── 视频号发布/ ← 新增,HTTP API(finder-assistant)
|
||||
├── 小红书发布/ ← 新增,逆向 creator API
|
||||
└── 快手发布/ ← 新增,逆向 creator API
|
||||
```
|
||||
|
||||
---

## 七、相关文件

| 文件 | 说明 |
|------|------|
| `脚本/distribute_all.py` | **主调度器**:一键分发到所有平台 |
| `脚本/cookie_manager.py` | Cookie 统一管理(有效期检查、防重复) |
| `脚本/video_utils.py` | 视频处理(封面提取、元数据) |
| `脚本/requirements.txt` | 依赖清单 |

---
## 八、依赖

- Python 3.10+
- httpx, playwright, cryptography, Pillow
- ffmpeg/ffprobe(系统已安装)
- Playwright chromium(`playwright install chromium`)
03_卡木(木)/木叶_视频内容/多平台分发/脚本/cookie_manager.py
@@ -0,0 +1,165 @@
#!/usr/bin/env python3
"""
多平台 Cookie 统一管理器
- 加载 Playwright storage_state.json
- 检查 Cookie 有效期
- 提供 cookie_str / headers
- 防止重复获取 Cookie
"""
import json
import time
from pathlib import Path
from datetime import datetime


class CookieManager:
    """统一管理各平台的 storage_state.json"""

    def __init__(self, state_path: Path, domain_filter: str = ""):
        self.state_path = Path(state_path)
        self.domain_filter = domain_filter
        self._state = {}
        self._cookies = {}
        self._load()

    def _load(self):
        if not self.state_path.exists():
            raise FileNotFoundError(f"Cookie 文件不存在: {self.state_path}")
        with open(self.state_path, "r", encoding="utf-8") as f:
            self._state = json.load(f)
        self._cookies = self._extract_cookies()

    def _extract_cookies(self) -> dict:
        result = {}
        for c in self._state.get("cookies", []):
            domain = c.get("domain", "")
            if self.domain_filter and self.domain_filter not in domain:
                continue
            result[c["name"]] = {
                "value": c["value"],
                "domain": domain,
                "expires": c.get("expires", -1),
                "path": c.get("path", "/"),
            }
        return result

    @property
    def cookie_str(self) -> str:
        return "; ".join(f"{k}={v['value']}" for k, v in self._cookies.items())

    @property
    def cookie_dict(self) -> dict:
        return {k: v["value"] for k, v in self._cookies.items()}

    def get(self, name: str, default: str = "") -> str:
        info = self._cookies.get(name)
        return info["value"] if info else default

    def get_local_storage(self, origin_filter: str, key: str) -> str:
        for origin in self._state.get("origins", []):
            if origin_filter not in origin.get("origin", ""):
                continue
            for item in origin.get("localStorage", []):
                if item["name"] == key:
                    return item["value"]
        return ""

    def check_expiry(self) -> dict:
        """检查 Cookie 有效期,返回 {status, expires_at, remaining_hours}"""
        now = time.time()
        min_expires = float("inf")
        expired_cookies = []
        for name, info in self._cookies.items():
            exp = info.get("expires", -1)
            if exp <= 0:
                continue
            if exp < now:
                expired_cookies.append(name)
            elif exp < min_expires:
                min_expires = exp

        if expired_cookies:
            return {
                "status": "expired",
                "expired_cookies": expired_cookies,
                "message": f"Cookie 已过期: {', '.join(expired_cookies[:5])}",
            }

        if min_expires == float("inf"):
            return {
                "status": "ok",
                "message": "Cookie 无明确过期时间(session cookie)",
                "remaining_hours": -1,
            }

        remaining = (min_expires - now) / 3600
        expires_at = datetime.fromtimestamp(min_expires).strftime("%Y-%m-%d %H:%M")
        if remaining < 1:
            status = "expiring_soon"
        elif remaining < 24:
            status = "warning"
        else:
            status = "ok"

        return {
            "status": status,
            "expires_at": expires_at,
            "remaining_hours": round(remaining, 1),
            "message": f"Cookie 有效至 {expires_at}(剩余 {remaining:.1f}h)",
        }

    def is_valid(self) -> bool:
        info = self.check_expiry()
        return info["status"] != "expired"

    @property
    def file_age_hours(self) -> float:
        if not self.state_path.exists():
            return float("inf")
        mtime = self.state_path.stat().st_mtime
        return (time.time() - mtime) / 3600

    def summary(self) -> str:
        expiry = self.check_expiry()
        age = self.file_age_hours
        lines = [
            f"Cookie 文件: {self.state_path.name}",
            f"Cookie 数量: {len(self._cookies)}",
            f"文件年龄: {age:.1f}h",
            f"状态: {expiry['message']}",
        ]
        return "\n".join(lines)


def check_all_cookies(base_dir: Path) -> dict:
    """检查所有平台的 Cookie 状态"""
    platforms = {
        "抖音": ("抖音发布/脚本/douyin_storage_state.json", "douyin.com"),
        "B站": ("B站发布/脚本/bilibili_storage_state.json", "bilibili.com"),
        "视频号": ("视频号发布/脚本/channels_storage_state.json", "weixin.qq.com"),
        "小红书": ("小红书发布/脚本/xiaohongshu_storage_state.json", "xiaohongshu.com"),
        "快手": ("快手发布/脚本/kuaishou_storage_state.json", "kuaishou.com"),
    }
    results = {}
    for name, (rel_path, domain) in platforms.items():
        path = base_dir / rel_path
        if not path.exists():
            results[name] = {"status": "missing", "message": "未登录"}
            continue
        try:
            mgr = CookieManager(path, domain)
            results[name] = mgr.check_expiry()
        except Exception as e:
            results[name] = {"status": "error", "message": str(e)}
    return results


if __name__ == "__main__":
    base = Path(__file__).parent.parent.parent
    print("=" * 50)
    print(" 多平台 Cookie 状态检查")
    print("=" * 50)
    results = check_all_cookies(base)
    ICONS = {"ok": "✓", "warning": "⚠", "expiring_soon": "⚠", "expired": "✗", "missing": "○", "error": "✗"}
    for platform, info in results.items():
        print(f" [{ICONS.get(info['status'], '?')}] {platform}: {info['message']}")
03_卡木(木)/木叶_视频内容/多平台分发/脚本/distribute_all.py
@@ -0,0 +1,207 @@
#!/usr/bin/env python3
"""
多平台一键分发 - 将成片目录下的视频同时发布到 5 个平台
支持: 抖音、B站、视频号、小红书、快手

用法:
    python3 distribute_all.py                       # 分发到所有已登录平台
    python3 distribute_all.py --platforms 抖音 B站   # 只分发到指定平台
    python3 distribute_all.py --check               # 只检查 Cookie 状态
    python3 distribute_all.py --video /path/to.mp4  # 分发单条视频
"""
import argparse
import asyncio
import importlib.util
import sys
from pathlib import Path

SCRIPT_DIR = Path(__file__).parent
BASE_DIR = SCRIPT_DIR.parent.parent
VIDEO_DIR = Path("/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片")

sys.path.insert(0, str(SCRIPT_DIR))
from cookie_manager import CookieManager, check_all_cookies

PLATFORM_CONFIG = {
    "抖音": {
        "script": BASE_DIR / "抖音发布" / "脚本" / "douyin_pure_api.py",
        "cookie": BASE_DIR / "抖音发布" / "脚本" / "douyin_storage_state.json",
        "domain": "douyin.com",
        "module": "douyin_pure_api",
    },
    "B站": {
        "script": BASE_DIR / "B站发布" / "脚本" / "bilibili_publish.py",
        "cookie": BASE_DIR / "B站发布" / "脚本" / "bilibili_storage_state.json",
        "domain": "bilibili.com",
        "module": "bilibili_publish",
    },
    "视频号": {
        "script": BASE_DIR / "视频号发布" / "脚本" / "channels_publish.py",
        "cookie": BASE_DIR / "视频号发布" / "脚本" / "channels_storage_state.json",
        "domain": "weixin.qq.com",
        "module": "channels_publish",
    },
    "小红书": {
        "script": BASE_DIR / "小红书发布" / "脚本" / "xiaohongshu_publish.py",
        "cookie": BASE_DIR / "小红书发布" / "脚本" / "xiaohongshu_storage_state.json",
        "domain": "xiaohongshu.com",
        "module": "xiaohongshu_publish",
    },
    "快手": {
        "script": BASE_DIR / "快手发布" / "脚本" / "kuaishou_publish.py",
        "cookie": BASE_DIR / "快手发布" / "脚本" / "kuaishou_storage_state.json",
        "domain": "kuaishou.com",
        "module": "kuaishou_publish",
    },
}


def check_cookies():
    """检查所有平台 Cookie 状态"""
    print("=" * 60)
    print(" 多平台 Cookie 状态")
    print("=" * 60)
    results = check_all_cookies(BASE_DIR)
    available = []
    icons = {"ok": "✓", "warning": "⚠", "expiring_soon": "⚠", "expired": "✗", "missing": "○", "error": "✗"}
    for platform, info in results.items():
        icon = icons.get(info["status"], "?")
        print(f" [{icon}] {platform}: {info['message']}")
        if info["status"] in ("ok", "warning"):
            available.append(platform)
    print(f"\n 可用平台: {', '.join(available) if available else '无'}")
    return available


def load_platform_module(name: str, config: dict):
    """动态加载平台发布模块"""
    script_path = config["script"]
    if not script_path.exists():
        return None
    spec = importlib.util.spec_from_file_location(config["module"], str(script_path))
    module = importlib.util.module_from_spec(spec)
    sys.path.insert(0, str(script_path.parent))
    spec.loader.exec_module(module)
    return module


async def distribute_to_platform(platform: str, config: dict, videos: list) -> dict:
    """分发到单个平台"""
    print(f"\n{'#'*60}")
    print(f" 开始分发到 [{platform}]")
    print(f"{'#'*60}")

    cookie_path = config["cookie"]
    if not cookie_path.exists():
        print(f" [✗] {platform} 未登录,跳过")
        return {"platform": platform, "status": "skipped", "reason": "未登录"}

    try:
        cm = CookieManager(cookie_path, config["domain"])
        if not cm.is_valid():
            print(f" [✗] {platform} Cookie 已过期,跳过")
            return {"platform": platform, "status": "skipped", "reason": "Cookie过期"}
    except Exception as e:
        print(f" [✗] {platform} Cookie 加载失败: {e}")
        return {"platform": platform, "status": "error", "reason": str(e)}

    module = load_platform_module(platform, config)
    if not module:
        print(f" [✗] {platform} 脚本不存在: {config['script']}")
        return {"platform": platform, "status": "error", "reason": "脚本不存在"}

    success = 0
    total = len(videos)
    for i, vp in enumerate(videos):
        title = getattr(module, "TITLES", {}).get(vp.name, f"{vp.stem} #Soul派对")
        try:
            ok = await module.publish_one(str(vp), title, i + 1, total)
            if ok:
                success += 1
        except Exception as e:
            print(f" [✗] {vp.name} 异常: {e}")
        if i < total - 1:
            await asyncio.sleep(3)

    return {
        "platform": platform,
        "status": "done",
        "success": success,
        "total": total,
    }


async def main():
    parser = argparse.ArgumentParser(description="多平台一键视频分发")
    parser.add_argument("--platforms", nargs="+", help="指定平台(默认全部已登录平台)")
    parser.add_argument("--check", action="store_true", help="只检查 Cookie 状态")
    parser.add_argument("--video", help="分发单条视频")
    parser.add_argument("--video-dir", help="自定义视频目录")
    args = parser.parse_args()

    available = check_cookies()

    if args.check:
        return 0

    if not available:
        print("\n[✗] 没有可用的平台,请先登录各平台")
        print("  抖音: python3 ../抖音发布/脚本/douyin_login.py")
        print("  B站: python3 ../B站发布/脚本/bilibili_login.py")
        print("  视频号: python3 ../视频号发布/脚本/channels_login.py")
        print("  小红书: python3 ../小红书发布/脚本/xiaohongshu_login.py")
        print("  快手: python3 ../快手发布/脚本/kuaishou_login.py")
        return 1

    targets = args.platforms if args.platforms else available
    targets = [t for t in targets if t in available]

    if not targets:
        print("\n[✗] 指定的平台均不可用")
        return 1

    video_dir = Path(args.video_dir) if args.video_dir else VIDEO_DIR
    if args.video:
        videos = [Path(args.video)]
    else:
        videos = sorted(video_dir.glob("*.mp4"))

    if not videos:
        print(f"\n[✗] 未找到视频: {video_dir}")
        return 1

    print(f"\n{'='*60}")
    print(" 分发计划")
    print(f"{'='*60}")
    print(f" 视频数: {len(videos)}")
    print(f" 目标平台: {', '.join(targets)}")
    print(f" 总任务: {len(videos) * len(targets)} 条")
    print()

    all_results = []
    for platform in targets:
        config = PLATFORM_CONFIG[platform]
        result = await distribute_to_platform(platform, config, videos)
        all_results.append(result)

    print(f"\n\n{'='*60}")
    print(" 多平台分发汇总")
    print(f"{'='*60}")
    for r in all_results:
        if r["status"] == "done":
            print(f" [{r['platform']}] 成功 {r['success']}/{r['total']}")
        elif r["status"] == "skipped":
            print(f" [{r['platform']}] 跳过 ({r['reason']})")
        else:
            print(f" [{r['platform']}] 错误 ({r.get('reason', '未知')})")

    total_success = sum(r.get("success", 0) for r in all_results if r["status"] == "done")
    total_tasks = sum(r.get("total", 0) for r in all_results if r["status"] == "done")
    print(f"\n 总计: {total_success}/{total_tasks}")
    return 0


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
03_卡木(木)/木叶_视频内容/多平台分发/脚本/requirements.txt
@@ -0,0 +1,4 @@
httpx>=0.27
playwright>=1.40
cryptography>=42.0
Pillow>=10.0
03_卡木(木)/木叶_视频内容/多平台分发/脚本/video_utils.py
@@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""
视频处理工具(封面提取、元数据读取)
依赖: ffmpeg、ffprobe(系统已安装)
"""
import json
import subprocess
from pathlib import Path


def get_video_info(video_path: str) -> dict:
    """获取视频元数据(时长、分辨率、编码)"""
    cmd = [
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_format", "-show_streams", video_path,
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=15)
        data = json.loads(result.stdout)
        vs = next((s for s in data.get("streams", []) if s["codec_type"] == "video"), {})
        fmt = data.get("format", {})
        return {
            "duration": float(fmt.get("duration", 0)),
            "width": int(vs.get("width", 0)),
            "height": int(vs.get("height", 0)),
            "codec": vs.get("codec_name", "unknown"),
            "size": int(fmt.get("size", 0)),
            "bitrate": int(fmt.get("bit_rate", 0)),
        }
    except Exception as e:
        return {"error": str(e)}


def extract_cover(video_path: str, output_path: str = "", timestamp: str = "00:00:00.500") -> str:
    """提取视频第一帧作为封面(JPEG)"""
    if not output_path:
        stem = Path(video_path).stem
        output_path = str(Path(video_path).parent / f"{stem}_cover.jpg")

    cmd = [
        "ffmpeg", "-y", "-i", video_path,
        "-ss", timestamp, "-frames:v", "1",
        "-q:v", "2", output_path,
    ]
    try:
        subprocess.run(cmd, capture_output=True, timeout=30, check=True)
        if Path(output_path).exists():
            return output_path
    except Exception as e:
        print(f" 封面提取失败: {e}")
    return ""


def extract_cover_bytes(video_path: str, timestamp: str = "00:00:00.500") -> bytes:
    """提取第一帧并返回 JPEG 字节(不写磁盘)"""
    cmd = [
        "ffmpeg", "-i", video_path,
        "-ss", timestamp, "-frames:v", "1",
        "-f", "image2", "-c:v", "mjpeg", "-q:v", "2", "pipe:1",
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=30, check=True)
        return result.stdout
    except Exception:
        return b""


if __name__ == "__main__":
    import sys
    if len(sys.argv) < 2:
        print("用法: python video_utils.py <video_path>")
        sys.exit(1)
    vp = sys.argv[1]
    info = get_video_info(vp)
    print(f"视频信息: {json.dumps(info, ensure_ascii=False, indent=2)}")
    cover = extract_cover(vp)
    if cover:
        print(f"封面已保存: {cover}")
03_卡木(木)/木叶_视频内容/小红书发布/SKILL.md
@@ -0,0 +1,76 @@
---
name: 小红书发布
description: >
  逆向小红书创作者中心内部 API 发布视频笔记(不打开浏览器)。Cookie 认证 → 获取上传 token →
  上传视频/封面到 CDN → 创建视频笔记。封面自动取视频第一帧。
triggers: 小红书发布、发布到小红书、小红书登录、小红书上传、RED发布
owner: 木叶
group: 木
version: "1.0"
updated: "2026-03-10"
---

# 小红书发布 Skill(v1.0)

> **核心能力**:逆向小红书创作者中心(creator.xiaohongshu.com)内部 API,Cookie 认证后全程 HTTP 操作。
> **认证方式**:Playwright 登录(扫码或手机号)获取 Cookie,之后纯 API。
> **推兔参考**:推兔对小红书也是用 webview + 页面注入,本方案更进一步直接调 HTTP API。

---

## 一、逆向 API 流程(3 步)

```
[Step 1] Cookie 认证
    Playwright 登录 → xiaohongshu_storage_state.json
    登录地址: https://creator.xiaohongshu.com/login
    关键 Cookie: web_session, a1

[Step 2] 上传视频 + 封面
    POST /api/media/v1/upload/web/token → 上传凭证
    POST /api/media/v1/upload/web/video → 上传视频
    POST /api/media/v1/upload/web/image → 上传封面(第一帧)

[Step 3] 创建视频笔记
    POST /api/galaxy/creator/note/publish
    body: {title, desc, video_id, cover, topics}
```
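三步对应的请求顺序可以先在本地列出来核对(示意代码:只生成请求计划、不真正发请求;端点取自上面的流程图,签名与请求体细节省略):

```python
CREATOR_HOST = "https://creator.xiaohongshu.com"


def plan_publish_requests() -> list:
    """按 Step 2/3 的顺序列出要发的请求(示意,不含签名与请求体细节)。"""
    return [
        ("POST", f"{CREATOR_HOST}/api/media/v1/upload/web/token"),
        ("POST", f"{CREATOR_HOST}/api/media/v1/upload/web/video"),
        ("POST", f"{CREATOR_HOST}/api/media/v1/upload/web/image"),
        ("POST", f"{CREATOR_HOST}/api/galaxy/creator/note/publish"),
    ]


for method, url in plan_publish_requests():
    print(method, url)
```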
---

## 二、一键命令

```bash
cd /Users/karuo/Documents/个人/卡若AI/03_卡木(木)/木叶_视频内容/小红书发布/脚本

# 1. 首次或 Cookie 过期:登录
python3 xiaohongshu_login.py

# 2. 批量发布
python3 xiaohongshu_publish.py
```

---
## 三、Cookie 有效期与注意事项

| Cookie | 有效期 | 说明 |
|--------|--------|------|
| web_session | ~1-3 天 | 主要认证,过期需重新登录 |
| a1 | ~30 天 | 设备标识 |

**小红书限制**:
- 图片最多 18 张(视频不受此限制)
- 标题最多 20 字
- 话题最多 5 个
- 发布频率建议间隔 8 秒以上
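标题与话题的上限可以在发布前本地兜底(示意的假设性辅助函数,非官方 API;发布脚本里对应 `title[:20]` 与 `tags[:5]`):

```python
def clamp_note_fields(title: str, topics: list) -> dict:
    """按上面的限制截断:标题 ≤20 字、话题 ≤5 个(假设性辅助函数)。"""
    return {"title": title[:20], "topics": topics[:5]}


note = clamp_note_fields(
    "这是一个远远超过二十个字的超长视频笔记标题示例文本",
    ["Soul派对", "创业", "认知觉醒", "副业思维", "AI剪辑", "多出来的话题"],
)
print(len(note["title"]), len(note["topics"]))  # 20 5
```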
---

## 四、相关文件

| 文件 | 说明 |
|------|------|
| `脚本/xiaohongshu_publish.py` | **主脚本**:逆向 API 视频上传+发布 |
| `脚本/xiaohongshu_login.py` | Playwright 登录 |
| `脚本/xiaohongshu_storage_state.json` | Cookie 存储(生成后自动创建) |
03_卡木(木)/木叶_视频内容/小红书发布/脚本/xiaohongshu_login.py
@@ -0,0 +1,46 @@
#!/usr/bin/env python3
"""小红书 Cookie 获取 - Playwright 登录 → 保存 storage_state"""
import asyncio
from pathlib import Path
from playwright.async_api import async_playwright

COOKIE_FILE = Path(__file__).parent / "xiaohongshu_storage_state.json"
LOGIN_URL = "https://creator.xiaohongshu.com/login"

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)


async def main():
    print("即将弹出浏览器,请登录小红书创作者中心。")
    print("支持扫码或手机号+验证码登录。")
    print("登录成功后(看到创作者中心主页),按 Enter 或在 Inspector 点绿色 ▶。\n")

    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)
        context = await browser.new_context(user_agent=UA, viewport={"width": 1280, "height": 720})
        await context.add_init_script("Object.defineProperty(navigator,'webdriver',{get:()=>undefined})")
        page = await context.new_page()
        await page.goto(LOGIN_URL, timeout=60000)

        print("等待登录完成...")
        try:
            await page.wait_for_url("**/home**", timeout=180000)
            await asyncio.sleep(3)
        except Exception:
            print("未自动检测到跳转,请手动确认已登录后按 Enter")
            await page.pause()

        await context.storage_state(path=str(COOKIE_FILE))
        await context.close()
        await browser.close()

    print(f"\n[✓] 小红书 Cookie 已保存: {COOKIE_FILE}")
    print(f"   文件大小: {COOKIE_FILE.stat().st_size} bytes")
    print("现在可运行 xiaohongshu_publish.py 批量发布。")


if __name__ == "__main__":
    asyncio.run(main())
03_卡木(木)/木叶_视频内容/小红书发布/脚本/xiaohongshu_publish.py
@@ -0,0 +1,315 @@
#!/usr/bin/env python3
"""
小红书纯 API 视频发布(无浏览器)
逆向小红书创作者中心内部 API,Cookie 认证后全程 HTTP 操作。

流程:
1. 从 storage_state.json 加载 cookies
2. POST 获取上传 token
3. POST 上传视频到 CDN
4. POST 创建视频笔记
"""
import asyncio
import json
import sys
import uuid
from pathlib import Path

import httpx

SCRIPT_DIR = Path(__file__).parent
COOKIE_FILE = SCRIPT_DIR / "xiaohongshu_storage_state.json"
VIDEO_DIR = Path("/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片")

sys.path.insert(0, str(SCRIPT_DIR.parent.parent / "多平台分发" / "脚本"))
from cookie_manager import CookieManager
from video_utils import extract_cover

CREATOR_HOST = "https://creator.xiaohongshu.com"
EDITH_HOST = "https://edith.xiaohongshu.com"
CUSTOMER_HOST = "https://customer.xiaohongshu.com"

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)

TITLES = {
    "早起不是为了开派对,是不吵老婆睡觉.mp4":
        "每天6点起床不是因为自律 是因为老婆还在睡 创业人最真实的起床理由",
    "懒人的活法 动作简单有利可图正反馈.mp4":
        "懒人也能赚钱 关键就三个词 动作简单有利可图正反馈",
    "初期团队先找两个IS,比钱好使 ENFJ链接人,ENTJ指挥.mp4":
        "创业初期别急着找钱 先找两个IS型人格 ENFJ链接人ENTJ指挥",
    "ICU出来一年多 活着要在互联网上留下东西.mp4":
        "ICU出来一年多 活着就要在互联网上留下东西",
    "MBTI疗愈SOUL 年轻人测MBTI,40到60岁走五行八卦.mp4":
        "20岁测MBTI 40岁以后该学五行八卦了",
    "Soul业务模型 派对+切片+小程序全链路.mp4":
        "派对获客AI切片小程序变现 全链路拆给你看",
    "Soul切片30秒到8分钟 AI半小时能剪10到30个.mp4":
        "AI剪辑有多快 半小时出10到30条 内容工厂效率密码",
    "刷牙听业务逻辑 Soul切片变现怎么跑.mp4":
        "刷牙3分钟听完一套变现逻辑 碎片时间才是生产力",
    "国学易经怎么学 两小时七七八八,召唤作者对话.mp4":
        "易经其实不难 两小时学个七七八八 跟古人对话",
    "广点通能投Soul了,1000曝光6到10块.mp4":
        "广点通终于能投Soul了 1000曝光只要6到10块",
    "建立信任不是求来的 卖外挂发邮件三个月拿下德国总代.mp4":
        "信任不是求来的 发三个月邮件拿下德国总代理",
    "核心就两个字 筛选。能开派对坚持7天的人再谈.mp4":
        "核心就两个字筛选 能坚持7天的人才值得深聊",
    "睡眠不好?每天放下一件事,做减法.mp4":
        "睡不好不是因为太累 是脑子里装太多 每天做减法",
    "这套体系花了170万,但前端几十块就能参与.mp4":
        "后端花170万搭体系 前端几十块就能参与",
    "金融AI获客体系 后端30人沉淀12年,前端丢手机.mp4":
        "后端30人沉淀12年 前端就丢个手机号",
}


def _build_headers(cookies: CookieManager) -> dict:
    return {
        "Cookie": cookies.cookie_str,
        "User-Agent": UA,
        "Referer": "https://creator.xiaohongshu.com/",
        "Origin": "https://creator.xiaohongshu.com",
        "Content-Type": "application/json",
    }


async def check_login(client: httpx.AsyncClient, cookies: CookieManager) -> dict:
    """检查登录状态"""
    url = f"{CREATOR_HOST}/api/galaxy/creator/home/personal_info"
    resp = await client.get(url, headers=_build_headers(cookies))
    try:
        data = resp.json()
        if data.get("code") == 0 or data.get("success"):
            return data.get("data", data)
    except Exception:
        pass
    return {}


async def get_upload_token(client: httpx.AsyncClient, cookies: CookieManager, count: int = 1) -> dict:
    """获取上传凭证"""
    print(" [1] 获取上传凭证...")
    url = f"{CREATOR_HOST}/api/media/v1/upload/web/token"
    body = {"biz_name": "spectrum", "scene": "creator_center", "file_count": count, "version": 1}
    resp = await client.post(url, json=body, headers=_build_headers(cookies), timeout=15.0)
    data = resp.json()
    if data.get("code") != 0 and not data.get("success"):
        url2 = f"{CREATOR_HOST}/api/galaxy/creator/data/upload/token"
        resp2 = await client.post(url2, json=body, headers=_build_headers(cookies), timeout=15.0)
        data = resp2.json()
    print(f"    凭证: {json.dumps(data, ensure_ascii=False)[:200]}")
    return data


async def upload_video(
    client: httpx.AsyncClient, cookies: CookieManager,
    upload_info: dict, file_path: str
) -> str:
    """上传视频文件到小红书 CDN"""
    print(" [2] 上传视频...")
    token_data = upload_info.get("data", upload_info)
    upload_url = token_data.get("uploadUrl", token_data.get("upload_url", ""))
    upload_token = token_data.get("uploadToken", token_data.get("upload_token", ""))
    ids = token_data.get("fileIds", token_data.get("file_ids"))
    file_id = ids[0] if ids else str(uuid.uuid4())

    if not upload_url:
        upload_url = f"{CREATOR_HOST}/api/media/v1/upload/web/video"

    raw = Path(file_path).read_bytes()
    fname = Path(file_path).name
    content_type = "video/mp4"

    # 有 token 时随表单一起提交;没有 token 时只传文件(两种情况头部一致)
    form = {"token": upload_token, "file_id": file_id} if upload_token else None
    resp = await client.post(
        upload_url,
        files={"file": (fname, raw, content_type)},
        data=form,
        headers={
            "Cookie": cookies.cookie_str,
            "User-Agent": UA,
            "Referer": "https://creator.xiaohongshu.com/",
        },
        timeout=300.0,
    )

    try:
        data = resp.json()
        vid = data.get("data", {}).get("fileId", data.get("data", {}).get("file_id", file_id))
        print(f"    视频 ID: {vid}")
        return vid
    except Exception:
        print(f"    上传响应: {resp.status_code} {resp.text[:200]}")
        return file_id


async def upload_cover_image(
    client: httpx.AsyncClient, cookies: CookieManager, cover_path: str
) -> str:
    """上传封面图片"""
    if not cover_path or not Path(cover_path).exists():
        return ""
    print(" [*] 上传封面...")
    url = f"{CREATOR_HOST}/api/media/v1/upload/web/image"
    with open(cover_path, "rb") as f:
        img_data = f.read()
    resp = await client.post(
        url,
        files={"file": ("cover.jpg", img_data, "image/jpeg")},
        headers={
            "Cookie": cookies.cookie_str,
            "User-Agent": UA,
            "Referer": "https://creator.xiaohongshu.com/",
        },
        timeout=30.0,
    )
    try:
        data = resp.json()
        cover_id = data.get("data", {}).get("fileId", "")
        if cover_id:
            print(f"    封面 ID: {cover_id}")
        return cover_id
    except Exception:
        return ""


async def create_note(
    client: httpx.AsyncClient, cookies: CookieManager,
    title: str, video_id: str, cover_id: str = "",
    tags: list | None = None,
) -> dict:
    """创建视频笔记"""
    print(" [3] 创建视频笔记...")
    url = f"{CREATOR_HOST}/api/galaxy/creator/note/publish"

    if tags is None:
        tags = ["Soul派对", "创业", "认知觉醒", "副业思维"]

    body = {
        "title": title[:20],
        "desc": title,
        "note_type": "video",
        "video_id": video_id,
        "post_time": "",
        "ats": [],
        "topics": [{"name": t} for t in tags[:5]],
        "is_private": False,
    }
    if cover_id:
        body["cover"] = {"file_id": cover_id}

    resp = await client.post(url, json=body, headers=_build_headers(cookies), timeout=30.0)
    data = resp.json() if resp.status_code == 200 else {}
    print(f"    响应: {json.dumps(data, ensure_ascii=False)[:300]}")
    return data


async def publish_one(video_path: str, title: str, idx: int = 1, total: int = 1) -> bool:
    fname = Path(video_path).name
    fsize = Path(video_path).stat().st_size

    print(f"\n{'='*60}")
    print(f" [{idx}/{total}] {fname}")
    print(f" 大小: {fsize/1024/1024:.1f}MB")
    print(f" 标题: {title[:60]}")
    print(f"{'='*60}")

    try:
        cookies = CookieManager(COOKIE_FILE, "xiaohongshu.com")
        if not cookies.is_valid():
            print(" [✗] Cookie 已过期,请重新运行 xiaohongshu_login.py")
            return False

        async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
            user = await check_login(client, cookies)
            if not user:
                print(" [✗] Cookie 无效,请重新登录")
                return False

            cover_path = extract_cover(video_path)

            upload_info = await get_upload_token(client, cookies)
            video_id = await upload_video(client, cookies, upload_info, video_path)
            if not video_id:
                print(" [✗] 视频上传失败")
                return False

            cover_id = await upload_cover_image(client, cookies, cover_path) if cover_path else ""
            result = await create_note(client, cookies, title, video_id, cover_id)

            code = result.get("code", -1)
            if code == 0 or result.get("success"):
                print(" [✓] 发布成功!")
                return True
            else:
                print(f" [✗] 发布失败: code={code}")
                return False

    except Exception as e:
        print(f" [✗] 异常: {e}")
        import traceback
        traceback.print_exc()
        return False


async def main():
    if not COOKIE_FILE.exists():
        print("[✗] Cookie 不存在,请先运行 xiaohongshu_login.py")
        return 1

    cookies = CookieManager(COOKIE_FILE, "xiaohongshu.com")
    expiry = cookies.check_expiry()
    print(f"[i] Cookie 状态: {expiry['message']}")

    async with httpx.AsyncClient(timeout=15.0) as c:
        user = await check_login(c, cookies)
        if not user:
            print("[✗] Cookie 无效")
            return 1
        print("[✓] 已登录\n")

    videos = sorted(VIDEO_DIR.glob("*.mp4"))
    if not videos:
        print("[✗] 未找到视频")
        return 1
    print(f"[i] 共 {len(videos)} 条视频\n")

    results = []
    for i, vp in enumerate(videos):
        title = TITLES.get(vp.name, f"{vp.stem}")
        ok = await publish_one(str(vp), title, i + 1, len(videos))
        results.append((vp.name, ok))
        if i < len(videos) - 1:
            await asyncio.sleep(8)

    print(f"\n{'='*60}")
    print(" 小红书发布汇总")
    print(f"{'='*60}")
    for name, ok in results:
        print(f" [{'✓' if ok else '✗'}] {name}")
    success = sum(1 for _, ok in results if ok)
    print(f"\n 成功: {success}/{len(results)}")
    return 0 if success == len(results) else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
03_卡木(木)/木叶_视频内容/快手发布/SKILL.md
@@ -0,0 +1,72 @@
---
name: 快手发布
description: >
  逆向快手创作者服务平台内部 API 发布视频(不打开浏览器)。Cookie 认证 → 获取上传 token →
  上传视频/封面 → 发布作品。封面自动取视频第一帧。
triggers: 快手发布、发布到快手、快手登录、快手上传、kuaishou发布
owner: 木叶
group: 木
version: "1.0"
updated: "2026-03-10"
---

# 快手发布 Skill(v1.0)

> **核心能力**:逆向快手创作者服务平台(cp.kuaishou.com)内部 API,Cookie 认证后全程 HTTP 操作。
> **认证方式**:Playwright 快手扫码登录获取 Cookie,之后纯 API。
> **推兔参考**:推兔对快手也是用 webview + 页面注入,本方案更进一步直接调 HTTP API。

---

## 一、逆向 API 流程(3 步)

```
[Step 1] Cookie 认证
    Playwright 快手扫码 → kuaishou_storage_state.json
    登录地址: https://cp.kuaishou.com/article/publish/video
    关键 Cookie: kuaishou.server.web_st, kuaishou.server.web_ph

[Step 2] 上传视频 + 封面
    POST /rest/cp/creator/media/pc/upload/token → 上传凭证
    POST /rest/cp/creator/media/pc/upload/video → 上传视频
    POST /rest/cp/creator/media/pc/upload/image → 上传封面

[Step 3] 发布作品
    POST /rest/cp/creator/pc/publish/single
    body: {caption, videoId, cover, type, publishType}
```
---

## 二、一键命令

```bash
cd /Users/karuo/Documents/个人/卡若AI/03_卡木(木)/木叶_视频内容/快手发布/脚本

# 1. 首次或 Cookie 过期:快手扫码登录
python3 kuaishou_login.py

# 2. 批量发布
python3 kuaishou_publish.py
```
---
|
||||
|
||||
## 三、Cookie 有效期
|
||||
|
||||
| Cookie | 有效期 | 说明 |
|
||||
|--------|--------|------|
|
||||
| kuaishou.server.web_st | ~7-30 天 | 主认证 |
|
||||
| kuaishou.server.web_ph | ~7-30 天 | 辅助认证 |
|
||||
|
||||
快手 Cookie 有效期中等,建议每周检查一次。
|
||||
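「每周检查一次」可以自动化:Playwright 的 storage_state JSON 里,每条 cookie 带 `expires` 字段(epoch 秒,-1 表示会话 Cookie)。下面是一个示意的小工具,估算两个关键 Cookie 中最早过期者的剩余天数(函数名为示意,非仓库真实实现):

```python
import json
import time
from pathlib import Path

# 上表中的两个关键认证 Cookie
KEY_COOKIES = {"kuaishou.server.web_st", "kuaishou.server.web_ph"}


def days_left(state_path: str) -> float:
    """读 Playwright storage_state,返回关键 Cookie 最早过期者的剩余天数;找不到返回 -1。"""
    state = json.loads(Path(state_path).read_text(encoding="utf-8"))
    expires = [c.get("expires", -1) for c in state.get("cookies", [])
               if c.get("name") in KEY_COOKIES and c.get("expires", -1) > 0]
    if not expires:
        return -1.0
    return (min(expires) - time.time()) / 86400
```

剩余天数小于 2 左右时,重新跑一次 `kuaishou_login.py` 即可。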

---

## 四、相关文件

| 文件 | 说明 |
|------|------|
| `脚本/kuaishou_publish.py` | **主脚本**:逆向 API 视频上传+发布 |
| `脚本/kuaishou_login.py` | Playwright 快手扫码登录 |
| `脚本/kuaishou_storage_state.json` | Cookie 存储(生成后自动创建) |
45
03_卡木(木)/木叶_视频内容/快手发布/脚本/kuaishou_login.py
Normal file
@@ -0,0 +1,45 @@
#!/usr/bin/env python3
"""快手 Cookie 获取 - Playwright 扫码登录 → 保存 storage_state"""
import asyncio
from pathlib import Path
from playwright.async_api import async_playwright

COOKIE_FILE = Path(__file__).parent / "kuaishou_storage_state.json"
LOGIN_URL = "https://cp.kuaishou.com/article/publish/video"

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)


async def main():
    print("即将弹出浏览器,请用快手 APP 扫码登录创作者服务平台。")
    print("登录成功后(看到创作中心页面),按 Enter 或在 Inspector 点绿色 ▶。\n")

    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)
        context = await browser.new_context(user_agent=UA, viewport={"width": 1280, "height": 720})
        await context.add_init_script("Object.defineProperty(navigator,'webdriver',{get:()=>undefined})")
        page = await context.new_page()
        await page.goto(LOGIN_URL, timeout=60000)

        print("等待扫码登录...")
        try:
            await page.wait_for_url("**/article/publish/**", timeout=180000)
            await asyncio.sleep(3)
        except Exception:
            print("未自动检测到跳转,请手动确认已登录后按 Enter")
            await page.pause()

        await context.storage_state(path=str(COOKIE_FILE))
        await context.close()
        await browser.close()

    print(f"\n[✓] 快手 Cookie 已保存: {COOKIE_FILE}")
    print(f"    文件大小: {COOKIE_FILE.stat().st_size} bytes")
    print("现在可运行 kuaishou_publish.py 批量发布。")


if __name__ == "__main__":
    asyncio.run(main())
316
03_卡木(木)/木叶_视频内容/快手发布/脚本/kuaishou_publish.py
Normal file
@@ -0,0 +1,316 @@
#!/usr/bin/env python3
"""
快手纯 API 视频发布(无浏览器)
逆向快手创作者服务平台 cp.kuaishou.com 内部 API

流程:
1. 从 storage_state.json 加载 cookies
2. 获取上传签名
3. 分片上传视频
4. 发布作品
"""
import asyncio
import hashlib
import json
import os
import sys
import time
import uuid
from pathlib import Path

import httpx

SCRIPT_DIR = Path(__file__).parent
COOKIE_FILE = SCRIPT_DIR / "kuaishou_storage_state.json"
VIDEO_DIR = Path("/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片")

sys.path.insert(0, str(SCRIPT_DIR.parent.parent / "多平台分发" / "脚本"))
from cookie_manager import CookieManager
from video_utils import extract_cover, extract_cover_bytes

CP_HOST = "https://cp.kuaishou.com"
CHUNK_SIZE = 4 * 1024 * 1024

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)

TITLES = {
    "早起不是为了开派对,是不吵老婆睡觉.mp4":
        "每天6点起床不是因为自律 是因为老婆还在睡 #Soul派对 #创业日记",
    "懒人的活法 动作简单有利可图正反馈.mp4":
        "懒人也能赚钱?动作简单有利可图正反馈 #Soul派对 #副业思维",
    "初期团队先找两个IS,比钱好使 ENFJ链接人,ENTJ指挥.mp4":
        "创业初期先找两个IS型人格 比融资好使十倍 #MBTI创业 #团队搭建",
    "ICU出来一年多 活着要在互联网上留下东西.mp4":
        "ICU出来一年多 活着就要在互联网上留下东西 #人生感悟 #创业觉醒",
    "MBTI疗愈SOUL 年轻人测MBTI,40到60岁走五行八卦.mp4":
        "20岁测MBTI 40岁该学五行八卦了 #MBTI #认知觉醒",
    "Soul业务模型 派对+切片+小程序全链路.mp4":
        "派对获客AI切片小程序变现 全链路拆解 #商业模式 #一人公司",
    "Soul切片30秒到8分钟 AI半小时能剪10到30个.mp4":
        "AI剪辑半小时出10到30条切片 内容工厂效率密码 #AI剪辑 #内容效率",
    "刷牙听业务逻辑 Soul切片变现怎么跑.mp4":
        "刷牙3分钟听完一套变现逻辑 #碎片创业 #副业逻辑",
    "国学易经怎么学 两小时七七八八,召唤作者对话.mp4":
        "易经两小时学个七七八八 跟古人对话 #国学 #易经入门",
    "广点通能投Soul了,1000曝光6到10块.mp4":
        "广点通能投Soul了 1000曝光只要6到10块 #广点通 #低成本获客",
    "建立信任不是求来的 卖外挂发邮件三个月拿下德国总代.mp4":
        "信任不是求来的 发三个月邮件拿下德国总代理 #销售思维 #信任建立",
    "核心就两个字 筛选。能开派对坚持7天的人再谈.mp4":
        "核心就两个字筛选 能坚持7天的人才值得深聊 #筛选思维 #创业认知",
    "睡眠不好?每天放下一件事,做减法.mp4":
        "睡不好不是太累 是脑子装太多 每天做减法 #做减法 #心理健康",
    "这套体系花了170万,但前端几十块就能参与.mp4":
        "后端花170万搭体系 前端几十块就能参与 #商业认知 #体系思维",
    "金融AI获客体系 后端30人沉淀12年,前端丢手机.mp4":
        "后端30人沉淀12年 前端就丢个手机号 #AI获客 #系统思维",
}
def _build_headers(cookies: CookieManager) -> dict:
    return {
        "Cookie": cookies.cookie_str,
        "User-Agent": UA,
        "Referer": "https://cp.kuaishou.com/article/publish/video",
        "Origin": "https://cp.kuaishou.com",
    }


async def check_login(client: httpx.AsyncClient, cookies: CookieManager) -> dict:
    """检查登录状态"""
    url = f"{CP_HOST}/rest/cp/creator/pc/home/infoV2"
    resp = await client.get(url, headers=_build_headers(cookies))
    try:
        data = resp.json()
        if data.get("result") == 1:
            return data.get("data", data)
    except Exception:
        pass
    return {}


async def get_upload_token(client: httpx.AsyncClient, cookies: CookieManager) -> dict:
    """获取上传凭证"""
    print(" [1] 获取上传凭证...")
    url = f"{CP_HOST}/rest/cp/creator/media/pc/upload/token"
    body = {"type": "video"}
    resp = await client.post(
        url, json=body,
        headers={**_build_headers(cookies), "Content-Type": "application/json"},
        timeout=15.0,
    )
    data = resp.json()
    if data.get("result") != 1:
        url2 = f"{CP_HOST}/rest/cp/creator/pc/publish/uploadToken"
        resp2 = await client.post(
            url2, json=body,
            headers={**_build_headers(cookies), "Content-Type": "application/json"},
            timeout=15.0,
        )
        data = resp2.json()
    print(f" 凭证: {json.dumps(data, ensure_ascii=False)[:200]}")
    return data


async def upload_video(
    client: httpx.AsyncClient, cookies: CookieManager,
    upload_info: dict, file_path: str
) -> str:
    """上传视频"""
    print(" [2] 上传视频...")
    token_data = upload_info.get("data", upload_info)
    upload_url = token_data.get("uploadUrl", token_data.get("upload_url", ""))
    upload_token = token_data.get("uploadToken", token_data.get("token", ""))

    if not upload_url:
        upload_url = f"{CP_HOST}/rest/cp/creator/media/pc/upload/video"

    raw = Path(file_path).read_bytes()
    fname = Path(file_path).name

    if upload_token:
        resp = await client.post(
            upload_url,
            files={"file": (fname, raw, "video/mp4")},
            data={"token": upload_token},
            headers={
                "Cookie": cookies.cookie_str,
                "User-Agent": UA,
                "Referer": "https://cp.kuaishou.com/",
            },
            timeout=300.0,
        )
    else:
        resp = await client.post(
            upload_url,
            files={"file": (fname, raw, "video/mp4")},
            headers={
                "Cookie": cookies.cookie_str,
                "User-Agent": UA,
                "Referer": "https://cp.kuaishou.com/",
            },
            timeout=300.0,
        )

    try:
        data = resp.json()
        vid = (
            data.get("data", {}).get("videoId", "")
            or data.get("data", {}).get("video_id", "")
            or data.get("data", {}).get("fileId", "")
        )
        print(f" 视频 ID: {vid}")
        return vid
    except Exception:
        print(f" 上传响应: {resp.status_code} {resp.text[:200]}")
        return ""


async def upload_cover(
    client: httpx.AsyncClient, cookies: CookieManager, cover_path: str
) -> str:
    """上传封面"""
    if not cover_path or not Path(cover_path).exists():
        return ""
    print(" [*] 上传封面...")
    url = f"{CP_HOST}/rest/cp/creator/media/pc/upload/image"
    with open(cover_path, "rb") as f:
        img_data = f.read()
    resp = await client.post(
        url,
        files={"file": ("cover.jpg", img_data, "image/jpeg")},
        headers={
            "Cookie": cookies.cookie_str,
            "User-Agent": UA,
            "Referer": "https://cp.kuaishou.com/",
        },
        timeout=30.0,
    )
    try:
        data = resp.json()
        cover_id = data.get("data", {}).get("url", data.get("data", {}).get("imageUrl", ""))
        if cover_id:
            print(f" 封面: {cover_id[:60]}...")
        return cover_id
    except Exception:
        return ""
async def publish_work(
    client: httpx.AsyncClient, cookies: CookieManager,
    title: str, video_id: str, cover_url: str = "",
) -> dict:
    """发布作品"""
    print(" [3] 发布作品...")
    url = f"{CP_HOST}/rest/cp/creator/pc/publish/single"

    body = {
        "caption": title,
        "videoId": video_id,
        "cover": cover_url,
        "type": 1,
        "publishType": 0,
    }

    resp = await client.post(
        url, json=body,
        headers={**_build_headers(cookies), "Content-Type": "application/json"},
        timeout=30.0,
    )
    data = resp.json() if resp.status_code == 200 else {}
    print(f" 响应: {json.dumps(data, ensure_ascii=False)[:300]}")
    return data


async def publish_one(video_path: str, title: str, idx: int = 1, total: int = 1) -> bool:
    fname = Path(video_path).name
    fsize = Path(video_path).stat().st_size

    print(f"\n{'='*60}")
    print(f" [{idx}/{total}] {fname}")
    print(f" 大小: {fsize/1024/1024:.1f}MB")
    print(f" 标题: {title[:60]}")
    print(f"{'='*60}")

    try:
        cookies = CookieManager(COOKIE_FILE, "kuaishou.com")
        if not cookies.is_valid():
            print(" [✗] Cookie 已过期,请重新运行 kuaishou_login.py")
            return False

        async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
            user = await check_login(client, cookies)
            if not user:
                print(" [✗] Cookie 无效,请重新登录")
                return False

            cover_path = extract_cover(video_path)

            upload_info = await get_upload_token(client, cookies)
            video_id = await upload_video(client, cookies, upload_info, video_path)
            if not video_id:
                print(" [✗] 视频上传失败")
                return False

            cover_url = await upload_cover(client, cookies, cover_path) if cover_path else ""
            result = await publish_work(client, cookies, title, video_id, cover_url)

            if result.get("result") == 1:
                print(" [✓] 发布成功!")
                return True
            else:
                print(f" [✗] 发布失败: {result.get('error_msg', 'unknown')}")
                return False

    except Exception as e:
        print(f" [✗] 异常: {e}")
        import traceback
        traceback.print_exc()
        return False


async def main():
    if not COOKIE_FILE.exists():
        print("[✗] Cookie 不存在,请先运行 kuaishou_login.py")
        return 1

    cookies = CookieManager(COOKIE_FILE, "kuaishou.com")
    expiry = cookies.check_expiry()
    print(f"[i] Cookie 状态: {expiry['message']}")

    async with httpx.AsyncClient(timeout=15.0) as c:
        user = await check_login(c, cookies)
        if not user:
            print("[✗] Cookie 无效")
            return 1
        print("[✓] 已登录\n")

    videos = sorted(VIDEO_DIR.glob("*.mp4"))
    if not videos:
        print("[✗] 未找到视频")
        return 1
    print(f"[i] 共 {len(videos)} 条视频\n")

    results = []
    for i, vp in enumerate(videos):
        title = TITLES.get(vp.name, f"{vp.stem} #Soul派对 #创业日记")
        ok = await publish_one(str(vp), title, i + 1, len(videos))
        results.append((vp.name, ok))
        if i < len(videos) - 1:
            await asyncio.sleep(5)

    print(f"\n{'='*60}")
    print(" 快手发布汇总")
    print(f"{'='*60}")
    for name, ok in results:
        print(f" [{'✓' if ok else '✗'}] {name}")
    success = sum(1 for _, ok in results if ok)
    print(f"\n 成功: {success}/{len(results)}")
    return 0 if success == len(results) else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
@@ -158,10 +158,13 @@ Content-Type: application/octet-stream
 Content-CRC32: {hex_crc32}
 Body: 文件二进制

-# finish 阶段
+# finish 阶段(单 chunk)
 POST /upload/v1/{storeUri}?uploadid={UploadID}&phase=finish
 Content-Type: text/plain
 Body: "1:{server_returned_crc32}"
+
+# finish 阶段(多 chunk)★ 必须用逗号分隔,不是换行!
+Body: "1:{crc32_1},2:{crc32_2},3:{crc32_3}"
 ```

 ### 4.5 SecurityKeys 加载注意事项

@@ -199,9 +202,15 @@ Body: "1:{server_returned_crc32}"

 **现象**:`/upload/v1/` 协议的 finish 阶段返回 `code=4019, message="Mismatch Part List"`。

-**根因**:finish 请求 body 格式为 `{partNumber}:{crc32}`,partNumber 必须与 transfer 阶段一致。transfer 用 `part_number=1`,则 finish body 必须是 `1:{crc32}`(不是 `0:{crc32}`)。
+**根因**:
+- **单 chunk**:body 为 `1:{crc32}`(换行也行)→ 可以成功
+- **多 chunk**:body 必须用**逗号分隔**,如 `1:{crc32_1},2:{crc32_2},3:{crc32_3}`
+- 用 `\n` 或 `\r\n` 分隔多 chunk 都会返回 4019

-**关键**:crc32 值使用**服务端返回的** `data.crc32`,不是客户端计算的。
+**关键**:
+- crc32 值使用**服务端返回的** `data.crc32`,不是客户端计算的
+- partNumber 从 1 开始(不是 0)
+- 多 chunk 分隔符是**逗号**(`,`),不是换行(`\n`)
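上面的拼 body 规则可以落成一个小函数(函数名为示意,非仓库真实实现):partNumber 从 1 起、逗号分隔、crc32 取服务端 transfer 阶段返回值。

```python
def build_finish_body(server_crc32s: list[str]) -> str:
    """按 finish 阶段规则拼 body:partNumber 从 1 开始,多 chunk 用逗号分隔。
    入参是 transfer 阶段各 chunk 响应里服务端返回的 crc32(顺序即分片顺序)。"""
    return ",".join(f"{i}:{crc}" for i, crc in enumerate(server_crc32s, start=1))
```

单 chunk 时结果就是 `1:{crc32}`,自然兼容上面的两种情形;不会产生换行,也就不会触发 4019。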

 ### 5.3 SecurityKeys 读取错误的 sign 数据

278
03_卡木(木)/木叶_视频内容/抖音发布/脚本/douyin_auto_browser_publish.py
Normal file
@@ -0,0 +1,278 @@
#!/usr/bin/env python3
"""
抖音自动浏览器发布 - Playwright 全自动化
流程: 打开创作者中心 → 上传视频 → 填标题 → 选封面(第一帧) → 发布
首次需扫码登录,之后全程自动
"""
import asyncio
import sys
import time
from pathlib import Path
from playwright.async_api import async_playwright

SCRIPT_DIR = Path(__file__).parent
COOKIE_FILE = SCRIPT_DIR / "douyin_storage_state.json"
VIDEO_DIR = Path("/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片")

PUBLISH_URL = "https://creator.douyin.com/creator-micro/content/post/video"

TITLES = {
    "早起不是为了开派对,是不吵老婆睡觉.mp4":
        "每天6点起床不是因为自律,是因为老婆还在睡。创业人最真实的起床理由,你猜到了吗?\n\n#Soul派对 #创业日记 #晨间直播 #真实创业",
    "懒人的活法 动作简单有利可图正反馈.mp4":
        "懒人也能赚钱?关键就三个词:动作简单、有利可图、正反馈。90%的人输在太勤快了\n\n#Soul派对 #副业思维 #私域变现 #认知升级",
    "初期团队先找两个IS,比钱好使 ENFJ链接人,ENTJ指挥.mp4":
        "创业初期别急着找钱,先找两个IS型人格。ENFJ负责链接,ENTJ负责指挥,比融资好使十倍\n\n#MBTI创业 #团队搭建 #Soul派对 #合伙人",
    "ICU出来一年多 活着要在互联网上留下东西.mp4":
        "ICU出来一年多了。那之后我想明白一件事:活着,就要在互联网上留下点东西\n\n#人生感悟 #创业觉醒 #Soul派对 #向死而生",
    "MBTI疗愈SOUL 年轻人测MBTI,40到60岁走五行八卦.mp4":
        "20岁测MBTI,40岁以后该学五行八卦了。年轻人用性格分类,中年人靠命理运营自己\n\n#MBTI #五行 #Soul派对 #认知觉醒",
    "Soul业务模型 派对+切片+小程序全链路.mp4":
        "一个人怎么跑通一条商业链路?派对获客→AI切片→小程序变现,全链路拆给你看\n\n#Soul派对 #商业模式 #全链路 #一人公司",
    "Soul切片30秒到8分钟 AI半小时能剪10到30个.mp4":
        "AI剪辑有多快?30秒到8分钟的切片,半小时出10到30条。内容工厂的效率密码\n\n#AI剪辑 #Soul派对 #内容效率 #批量生产",
    "刷牙听业务逻辑 Soul切片变现怎么跑.mp4":
        "刷牙3分钟,刚好听完一套变现逻辑。Soul切片怎么从0到日产30条?碎片时间才是生产力\n\n#Soul派对 #碎片创业 #副业逻辑 #效率",
    "国学易经怎么学 两小时七七八八,召唤作者对话.mp4":
        "易经其实不难,两小时就能学个七七八八。关键是找到作者的思维频率,跟古人对话\n\n#国学 #易经入门 #Soul派对 #终身学习",
    "广点通能投Soul了,1000曝光6到10块.mp4":
        "广点通终于能投Soul了!1000次曝光只要6到10块,这个获客成本你敢信?\n\n#Soul派对 #广点通投放 #低成本获客 #流量红利",
    "建立信任不是求来的 卖外挂发邮件三个月拿下德国总代.mp4":
        "信任不是求来的。一个卖外挂的小伙子,发了三个月邮件,拿下德国总代理。死磕比社交有用\n\n#销售思维 #信任建立 #Soul派对 #死磕精神",
    "核心就两个字 筛选。能开派对坚持7天的人再谈.mp4":
        "别跟所有人合作,核心就两个字:筛选。能坚持开7天派对的人,才值得深聊\n\n#筛选思维 #Soul派对 #创业认知 #人性",
    "睡眠不好?每天放下一件事,做减法.mp4":
        "睡不好不是因为太累,是因为脑子里装太多。每天放下一件事,做减法,睡眠自然好\n\n#睡眠 #做减法 #Soul派对 #心理健康",
    "这套体系花了170万,但前端几十块就能参与.mp4":
        "后端花了170万搭的体系,前端几十块就能参与。真正的商业模式是让别人低成本上车\n\n#商业认知 #Soul派对 #低门槛创业 #体系思维",
    "金融AI获客体系 后端30人沉淀12年,前端丢手机.mp4":
        "后端30人沉淀了12年,前端操作就是丢个手机号。金融AI获客体系,把复杂留给自己\n\n#AI获客 #金融科技 #Soul派对 #系统思维",
}
async def wait_for_upload_complete(page, timeout=180):
    """等待视频上传完成(进度条消失或出现封面选择)"""
    print(" 等待上传完成...")
    start = time.time()
    while time.time() - start < timeout:
        try:
            progress = await page.query_selector('[class*="progress"]')
            upload_text = await page.query_selector('text=上传中')
            cover_section = await page.query_selector('text=选择封面')
            publish_btn = await page.query_selector('button:has-text("发布")')

            if cover_section or (publish_btn and not upload_text and not progress):
                print(" 上传完成!")
                return True
        except Exception:
            pass
        await asyncio.sleep(2)
    print(" 上传超时")
    return False


async def publish_one_video(page, video_path: Path, title: str, idx: int, total: int):
    """发布单条视频"""
    print(f"\n{'='*60}")
    print(f" [{idx}/{total}] {video_path.name}")
    print(f" 大小: {video_path.stat().st_size / 1024 / 1024:.1f}MB")
    print(f" 标题: {title[:60]}...")
    print(f"{'='*60}")

    await page.goto(PUBLISH_URL, wait_until="networkidle", timeout=60000)
    await asyncio.sleep(3)

    # check for login redirect
    if "login" in page.url.lower():
        print(" [!] 需要重新登录,请扫码...")
        await page.pause()
        await page.goto(PUBLISH_URL, wait_until="networkidle", timeout=60000)
        await asyncio.sleep(3)

    print(" [1] 上传视频...")
    file_input = await page.query_selector('input[type="file"][accept*="video"]')
    if not file_input:
        file_input = await page.query_selector('input[type="file"]')
    if not file_input:
        print(" [!] 未找到文件上传入口,尝试点击上传区域...")
        upload_area = await page.query_selector('[class*="upload"]')
        if upload_area:
            await upload_area.click()
            await asyncio.sleep(1)
            file_input = await page.query_selector('input[type="file"]')

    if not file_input:
        print(" [✗] 找不到文件上传元素")
        return False

    await file_input.set_input_files(str(video_path))
    print(f" 文件已选择: {video_path.name}")

    if not await wait_for_upload_complete(page):
        print(" [✗] 上传超时")
        return False

    await asyncio.sleep(2)

    print(" [2] 填写标题...")
    text_editor = await page.query_selector('[class*="editor-kit-container"]')
    if not text_editor:
        text_editor = await page.query_selector('[class*="text-editor"]')
    if not text_editor:
        text_editor = await page.query_selector('[contenteditable="true"]')
    if not text_editor:
        text_editor = await page.query_selector('.notranslate[contenteditable]')

    if text_editor:
        await text_editor.click()
        await asyncio.sleep(0.5)
        await page.keyboard.press("Meta+a")
        await asyncio.sleep(0.3)
        await page.keyboard.press("Backspace")
        await asyncio.sleep(0.3)
        for line in title.split('\n'):
            await page.keyboard.type(line, delay=20)
            await page.keyboard.press("Enter")
        print(" 标题已填写")
    else:
        print(" [!] 未找到标题编辑器,尝试 textarea...")
        textarea = await page.query_selector('textarea')
        if textarea:
            await textarea.fill(title)
            print(" 标题已填写 (textarea)")
        else:
            print(" [!] 无法填写标题")

    await asyncio.sleep(1)

    print(" [3] 封面设置为第一帧...")
    # poster_delay=0 in the UI means first frame is already default
    # just ensure we don't change it

    print(" [4] 点击发布...")
    publish_btn = await page.query_selector('button:has-text("发布")')
    if not publish_btn:
        publish_btn = await page.query_selector('[class*="publish"]:has-text("发布")')

    if publish_btn:
        is_disabled = await publish_btn.get_attribute("disabled")
        if is_disabled:
            print(" 发布按钮禁用,等待...")
            for _ in range(30):
                await asyncio.sleep(2)
                is_disabled = await publish_btn.get_attribute("disabled")
                if not is_disabled:
                    break

        await publish_btn.click()
        print(" 已点击发布")
        await asyncio.sleep(5)

        # check result
        success_text = await page.query_selector('text=发布成功')
        manage_text = await page.query_selector('text=作品管理')
        if success_text or manage_text or "manage" in page.url.lower():
            print(" [✓] 发布成功!")
            return True

        # check for verify dialog
        verify = await page.query_selector('text=身份验证')
        if verify:
            print(" [!] 需要身份验证,请手动完成...")
            await page.pause()
            return True

        print(f" [?] 发布状态未知,当前页面: {page.url[:80]}")
        await asyncio.sleep(3)
        return True
    else:
        print(" [✗] 未找到发布按钮")
        return False
async def main():
    print("=" * 60)
    print(" 抖音自动浏览器发布 - 全自动模式")
    print("=" * 60)

    videos = sorted(VIDEO_DIR.glob("*.mp4"))
    if not videos:
        print("[✗] 未找到视频")
        return 1

    print(f"\n[i] 共 {len(videos)} 条视频待发布:\n")
    for i, v in enumerate(videos, 1):
        print(f" {i:2d}. {v.name[:60]}")
    print()

    async with async_playwright() as pw:
        browser = await pw.chromium.launch(
            headless=False,
            args=["--disable-blink-features=AutomationControlled"],
        )

        if COOKIE_FILE.exists():
            context = await browser.new_context(
                storage_state=str(COOKIE_FILE),
                user_agent=(
                    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                    "AppleWebKit/537.36 (KHTML, like Gecko) "
                    "Chrome/143.0.0.0 Safari/537.36"
                ),
                viewport={"width": 1280, "height": 900},
            )
        else:
            context = await browser.new_context(
                user_agent=(
                    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                    "AppleWebKit/537.36 (KHTML, like Gecko) "
                    "Chrome/143.0.0.0 Safari/537.36"
                ),
                viewport={"width": 1280, "height": 900},
            )

        await context.add_init_script("""
            Object.defineProperty(navigator, 'webdriver', { get: () => undefined });
        """)

        page = await context.new_page()
        await page.goto(PUBLISH_URL, wait_until="networkidle", timeout=60000)
        await asyncio.sleep(3)

        if "login" in page.url.lower():
            print("[!] 需要登录,请扫码登录抖音创作者中心...")
            print("    登录成功后点 Playwright Inspector 的绿色 ▶\n")
            await page.pause()

        await context.storage_state(path=str(COOKIE_FILE))
        print("[✓] Cookie 已保存\n")

        results = []
        for i, vp in enumerate(videos, 1):
            title = TITLES.get(vp.name, f"{vp.stem} #Soul派对 #创业日记")
            ok = await publish_one_video(page, vp, title, i, len(videos))
            results.append((vp.name, ok))

            if ok and i < len(videos):
                await context.storage_state(path=str(COOKIE_FILE))
                print(" 等待 10s 后发下一条...")
                await asyncio.sleep(10)

        await context.storage_state(path=str(COOKIE_FILE))
        await context.close()
        await browser.close()

    print(f"\n{'='*60}")
    print(" 发布汇总")
    print(f"{'='*60}")
    for name, ok in results:
        s = "✓" if ok else "✗"
        print(f" [{s}] {name}")
    success = sum(1 for _, ok in results if ok)
    print(f"\n 成功: {success}/{len(results)}")
    return 0 if success == len(results) else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
372
03_卡木(木)/木叶_视频内容/抖音发布/脚本/douyin_batch_now.py
Normal file
@@ -0,0 +1,372 @@
#!/usr/bin/env python3
"""
抖音批量发布 - 登录+发布一体化
Cookie 过期时自动弹窗重新扫码,登录后立刻发布。
"""
import asyncio
import base64
import datetime
import hashlib
import hmac
import json
import os
import random
import string
import sys
import time
import zlib
from pathlib import Path
from urllib.parse import urlencode

import httpx
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import serialization
from cryptography import x509
from playwright.async_api import async_playwright

SCRIPT_DIR = Path(__file__).parent
COOKIE_FILE = SCRIPT_DIR / "douyin_storage_state.json"
VIDEO_DIR = Path("/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片")
BASE = "https://creator.douyin.com"
VOD_HOST = "https://vod.bytedanceapi.com"
CHUNK_SIZE = 3 * 1024 * 1024
UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)
USER_ID = ""

TITLES = {
    "早起不是为了开派对,是不吵老婆睡觉.mp4":
        "每天6点起床不是因为自律,是因为老婆还在睡。创业人最真实的起床理由,你猜到了吗?#Soul派对 #创业日记 #晨间直播 #真实创业",
    "懒人的活法 动作简单有利可图正反馈.mp4":
        "懒人也能赚钱?关键就三个词:动作简单、有利可图、正反馈。90%的人输在太勤快了 #Soul派对 #副业思维 #私域变现 #认知升级",
    "初期团队先找两个IS,比钱好使 ENFJ链接人,ENTJ指挥.mp4":
        "创业初期别急着找钱,先找两个IS型人格。ENFJ负责链接,ENTJ负责指挥,比融资好使十倍 #MBTI创业 #团队搭建 #Soul派对 #合伙人",
    "ICU出来一年多 活着要在互联网上留下东西.mp4":
        "ICU出来一年多了。那之后我想明白一件事:活着,就要在互联网上留下点东西 #人生感悟 #创业觉醒 #Soul派对 #向死而生",
    "MBTI疗愈SOUL 年轻人测MBTI,40到60岁走五行八卦.mp4":
        "20岁测MBTI,40岁以后该学五行八卦了。年轻人用性格分类,中年人靠命理运营自己 #MBTI #五行 #Soul派对 #认知觉醒",
    "Soul业务模型 派对+切片+小程序全链路.mp4":
        "一个人怎么跑通一条商业链路?派对获客→AI切片→小程序变现,全链路拆给你看 #Soul派对 #商业模式 #全链路 #一人公司",
    "Soul切片30秒到8分钟 AI半小时能剪10到30个.mp4":
        "AI剪辑有多快?30秒到8分钟的切片,半小时出10到30条。内容工厂的效率密码 #AI剪辑 #Soul派对 #内容效率 #批量生产",
    "刷牙听业务逻辑 Soul切片变现怎么跑.mp4":
        "刷牙3分钟,刚好听完一套变现逻辑。Soul切片怎么从0到日产30条?碎片时间才是生产力 #Soul派对 #碎片创业 #副业逻辑 #效率",
    "国学易经怎么学 两小时七七八八,召唤作者对话.mp4":
        "易经其实不难,两小时就能学个七七八八。关键是找到作者的思维频率,跟古人对话 #国学 #易经入门 #Soul派对 #终身学习",
    "广点通能投Soul了,1000曝光6到10块.mp4":
        "广点通终于能投Soul了!1000次曝光只要6到10块,这个获客成本你敢信?#Soul派对 #广点通投放 #低成本获客 #流量红利",
    "建立信任不是求来的 卖外挂发邮件三个月拿下德国总代.mp4":
        "信任不是求来的。一个卖外挂的小伙子,发了三个月邮件,拿下德国总代理。死磕比社交有用 #销售思维 #信任建立 #Soul派对 #死磕精神",
    "核心就两个字 筛选。能开派对坚持7天的人再谈.mp4":
        "别跟所有人合作,核心就两个字:筛选。能坚持开7天派对的人,才值得深聊 #筛选思维 #Soul派对 #创业认知 #人性",
    "睡眠不好?每天放下一件事,做减法.mp4":
        "睡不好不是因为太累,是因为脑子里装太多。每天放下一件事,做减法,睡眠自然好 #睡眠 #做减法 #Soul派对 #心理健康",
    "这套体系花了170万,但前端几十块就能参与.mp4":
        "后端花了170万搭的体系,前端几十块就能参与。真正的商业模式是让别人低成本上车 #商业认知 #Soul派对 #低门槛创业 #体系思维",
    "金融AI获客体系 后端30人沉淀12年,前端丢手机.mp4":
        "后端30人沉淀了12年,前端操作就是丢个手机号。金融AI获客体系,把复杂留给自己 #AI获客 #金融科技 #Soul派对 #系统思维",
}
class SecurityKeys:
    def __init__(self, state_path: Path):
        with open(state_path, "r", encoding="utf-8") as f:
            state = json.load(f)
        self.cookies = {c["name"]: c["value"] for c in state.get("cookies", [])
                        if "douyin.com" in c.get("domain", "")}
        self.cookie_str = "; ".join(f"{k}={v}" for k, v in self.cookies.items())
        self.ms_token = ""
        self.ec_private_key = None
        self.ec_public_key_bytes = b""
        self.server_public_key = None
        self.ticket = ""
        self.ts_sign_raw = ""
        self.csrf_token = self.cookies.get("passport_csrf_token", "")
        for origin in state.get("origins", []):
            if "creator.douyin.com" not in origin.get("origin", ""):
                continue
            for item in origin.get("localStorage", []):
                name, val = item["name"], item["value"]
                if "s_sdk_crypt_sdk" in name:
                    d = json.loads(json.loads(val)["data"])
                    pem = d["ec_privateKey"].replace("\\r\\n", "\n")
                    self.ec_private_key = serialization.load_pem_private_key(pem.encode(), password=None)
                    self.ec_public_key_bytes = self.ec_private_key.public_key().public_bytes(
                        serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint)
                elif "s_sdk_server_cert_key" in name:
                    cert_pem = json.loads(val)["cert"]
                    cert = x509.load_pem_x509_certificate(cert_pem.encode())
                    self.server_public_key = cert.public_key()
                elif "s_sdk_sign_data_key" in name and "web_protect" in name:
                    d = json.loads(json.loads(val)["data"])
                    self.ticket = d["ticket"]
                    self.ts_sign_raw = d["ts_sign"]
                elif name == "xmst":
                    self.ms_token = val

    def compute_ticket_guard(self, path: str) -> dict:
        if not self.ec_private_key or not self.server_public_key:
            return {}
        eph_priv = ec.generate_private_key(ec.SECP256R1())
        eph_pub_bytes = eph_priv.public_key().public_bytes(
            serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint)
        shared_secret = eph_priv.exchange(ec.ECDH(), self.server_public_key)
        ts = int(time.time())
        ts_hex = self.ts_sign_raw.replace("ts.2.", "")
        ts_bytes = bytes.fromhex(ts_hex)
        new_first = bytes(a ^ b for a, b in zip(ts_bytes[:32], shared_secret[:32]))
        new_ts_sign = "ts.2." + (new_first + ts_bytes[32:]).hex()
        msg = f"{self.ticket},{path},{ts}"
        req_sign = hmac.new(shared_secret, msg.encode(), hashlib.sha256).digest()
        client_data = {"ts_sign": new_ts_sign, "req_content": "ticket,path,timestamp",
                       "req_sign": base64.b64encode(req_sign).decode(), "timestamp": ts}
        return {
            "bd-ticket-guard-client-data": base64.b64encode(json.dumps(client_data).encode()).decode(),
            "bd-ticket-guard-ree-public-key": base64.b64encode(eph_pub_bytes).decode(),
            "bd-ticket-guard-version": "2",
            "bd-ticket-guard-web-version": "2",
            "bd-ticket-guard-web-sign-type": "1",
        }
def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def _rand(n=11):
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))


def aws4_sign(ak, sk, token, qs, method="GET", body=b""):
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    ds = now.strftime("%Y%m%d")
    region, service = "cn-north-1", "vod"
    body_hash = hashlib.sha256(body).hexdigest()
    if method == "POST":
        signed_headers = "content-type;x-amz-date;x-amz-security-token"
        header_str = f"content-type:text/plain;charset=UTF-8\nx-amz-date:{amz_date}\nx-amz-security-token:{token}\n"
    else:
        signed_headers = "x-amz-date;x-amz-security-token"
        header_str = f"x-amz-date:{amz_date}\nx-amz-security-token:{token}\n"
    canonical = f"{method}\n/\n{qs}\n{header_str}\n{signed_headers}\n{body_hash}"
    scope = f"{ds}/{region}/{service}/aws4_request"
    sts = f"AWS4-HMAC-SHA256\n{amz_date}\n{scope}\n{hashlib.sha256(canonical.encode()).hexdigest()}"
    k = _hmac_sha256(f"AWS4{sk}".encode(), ds)
    k = _hmac_sha256(k, region); k = _hmac_sha256(k, service); k = _hmac_sha256(k, "aws4_request")
    sig = hmac.new(k, sts.encode(), hashlib.sha256).hexdigest()
    return f"AWS4-HMAC-SHA256 Credential={ak}/{scope}, SignedHeaders={signed_headers}, Signature={sig}", amz_date, token
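aws4_sign 签的是 canonical query string,因此调用方在拼 qs 之前必须先按 key 排序,否则服务端按排序后的参数重算签名会对不上。下面用 Apply 阶段的参数做一个示意(参数值为示例;严格的 SigV4 还要求对键值做 URL 编码,这里与脚本一致地假设参数已是 URL 安全字符):

```python
# 示意:canonical query string 必须按 key 的 ASCII 序排列
params = {
    "Action": "ApplyUploadInner",
    "FileSize": "1024",
    "FileType": "video",
    "IsInner": "1",
    "SpaceName": "aweme",
    "Version": "2020-11-19",
    "app_id": "2906",   # 小写键按 ASCII 排在大写键之后
    "s": "abc123def45",
}
qs = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
```

注意 ASCII 序里大写字母在小写之前,所以 `Version` 排在 `app_id` 前面,这正是 `sorted(params.items())` 给出的顺序。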


async def do_login():
    """弹窗扫码登录,返回刷新后的 SecurityKeys"""
    print("\n >>> 需要扫码登录,浏览器即将弹出 <<<")
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)
        ctx = await browser.new_context(
            user_agent=UA, viewport={"width": 1280, "height": 720})
        await ctx.add_init_script("Object.defineProperty(navigator,'webdriver',{get:()=>undefined});")
        page = await ctx.new_page()
        await page.goto("https://creator.douyin.com/", timeout=60000)
        await page.pause()
        await ctx.storage_state(path=str(COOKIE_FILE))
        await ctx.close()
        await browser.close()
    print(" >>> Cookie 已刷新 <<<\n")
    return SecurityKeys(COOKIE_FILE)


async def check_cookie(keys):
    """检查 Cookie 是否有效,返回 (valid, user_id, nickname)"""
    async with httpx.AsyncClient(timeout=10.0) as c:
        resp = await c.get(f"{BASE}/web/api/media/user/info/",
                           headers={"Cookie": keys.cookie_str, "User-Agent": UA})
        data = resp.json()
        if data.get("status_code") != 0:
            return False, "", ""
        user = data.get("user") or data.get("user_info") or {}
        uid = str(user.get("uid", "") or user.get("user_id", ""))
        return True, uid, user.get("nickname", "unknown")


async def publish_one_video(keys, client, video_path, title, timing_ts):
    """上传+提交+发布 单条视频(全在一个 client 内快速完成)"""
    global USER_ID
    fsize = Path(video_path).stat().st_size

    # 1) Auth
    resp = await client.get(f"{BASE}/web/api/media/upload/auth/v5/",
                            headers={"Cookie": keys.cookie_str, "User-Agent": UA})
    data = resp.json()
    if data.get("status_code") != 0:
        return False, "auth失败"
    auth = json.loads(data["auth"])

    # 2) Apply
    params = {"Action": "ApplyUploadInner", "FileSize": str(fsize), "FileType": "video",
              "IsInner": "1", "SpaceName": "aweme", "Version": "2020-11-19", "app_id": "2906", "s": _rand()}
    qs = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    authorization, amz_date, token = aws4_sign(auth["AccessKeyID"], auth["SecretAccessKey"], auth["SessionToken"], qs)
    resp = await client.get(f"{VOD_HOST}/?{qs}",
                            headers={"authorization": authorization, "x-amz-date": amz_date,
                                     "x-amz-security-token": token, "User-Agent": UA})
    result = resp.json().get("Result", {})
    nodes = (result.get("InnerUploadAddress") or {}).get("UploadNodes", [])
|
||||
if not nodes:
|
||||
return False, "无UploadNodes"
|
||||
node = nodes[0]
|
||||
store = node["StoreInfos"][0]
|
||||
host, upload_id = node["UploadHost"], store["UploadID"]
|
||||
base_url = f"https://{host}/upload/v1/{store['StoreUri']}"
|
||||
auth_h = {"Authorization": store["Auth"], "User-Agent": UA}
|
||||
|
||||
# 3) Upload
|
||||
raw = Path(video_path).read_bytes()
|
||||
n_chunks = (len(raw) + CHUNK_SIZE - 1) // CHUNK_SIZE
|
||||
crc_parts = []
|
||||
for i in range(n_chunks):
|
||||
chunk = raw[i*CHUNK_SIZE : (i+1)*CHUNK_SIZE]
|
||||
crc = "%08x" % (zlib.crc32(chunk) & 0xFFFFFFFF)
|
||||
resp = await client.post(f"{base_url}?uploadid={upload_id}&part_number={i+1}&phase=transfer",
|
||||
content=chunk, headers={**auth_h, "Content-CRC32": crc, "Content-Type": "application/octet-stream"}, timeout=120.0)
|
||||
rd = resp.json() if resp.status_code == 200 else {}
|
||||
if rd.get("code") != 2000:
|
||||
return False, f"chunk {i+1} 失败"
|
||||
sv_crc = rd.get("data", {}).get("crc32", crc)
|
||||
crc_parts.append(f"{i+1}:{sv_crc}")
|
||||
print(f" chunk {i+1}/{n_chunks} ✓")
|
||||
finish_resp = await client.post(f"{base_url}?uploadid={upload_id}&phase=finish",
|
||||
content=",".join(crc_parts).encode(), headers={**auth_h, "Content-Type": "text/plain"}, timeout=60.0)
|
||||
fd = finish_resp.json() if finish_resp.status_code == 200 else {}
|
||||
if fd.get("code") != 2000:
|
||||
return False, f"finish: {fd.get('message','')}"
|
||||
|
||||
# 4) Commit
|
||||
qs2_params = {"Action": "CommitUploadInner", "SpaceName": "aweme",
|
||||
"Version": "2020-11-19", "app_id": "2906", "user_id": USER_ID}
|
||||
qs2 = "&".join(f"{k}={v}" for k, v in sorted(qs2_params.items()))
|
||||
body = json.dumps({"SessionKey": node["SessionKey"], "Functions": [{"Name": "GetMeta"}]}).encode("utf-8")
|
||||
auth2, amz2, tok2 = aws4_sign(auth["AccessKeyID"], auth["SecretAccessKey"], auth["SessionToken"], qs2, method="POST", body=body)
|
||||
resp = await client.post(f"{VOD_HOST}/?{qs2}", content=body,
|
||||
headers={"authorization": auth2, "x-amz-date": amz2, "x-amz-security-token": tok2,
|
||||
"content-type": "text/plain;charset=UTF-8", "User-Agent": UA}, timeout=30.0)
|
||||
cd = resp.json()
|
||||
results = cd.get("Result", {}).get("Results", [])
|
||||
video_id = results[0].get("Vid", "") if results else ""
|
||||
if not video_id:
|
||||
return False, f"commit失败: {cd.get('ResponseMetadata',{}).get('Error',{})}"
|
||||
print(f" video_id={video_id}")
|
||||
|
||||
# 5) create_v2
|
||||
path = "/web/api/media/aweme/create_v2/"
|
||||
creation_id = f"{_rand(8)}{int(time.time()*1000)}"
|
||||
body_json = {"item": {"common": {
|
||||
"text": title, "caption": title, "visibility_type": 0, "download": 1,
|
||||
"timing": timing_ts if timing_ts > 0 else 0, "creation_id": creation_id,
|
||||
"media_type": 4, "video_id": video_id, "music_source": 0, "music_id": None,
|
||||
}, "cover": {"poster": "", "poster_delay": 0}}}
|
||||
guard = keys.compute_ticket_guard(path)
|
||||
qp = {"read_aid": "2906", "cookie_enabled": "true", "aid": "1128"}
|
||||
if keys.ms_token:
|
||||
qp["msToken"] = keys.ms_token
|
||||
headers = {"Cookie": keys.cookie_str, "User-Agent": UA, "Content-Type": "application/json",
|
||||
"Accept": "application/json, text/plain, */*",
|
||||
"Referer": "https://creator.douyin.com/creator-micro/content/post/video",
|
||||
"Origin": "https://creator.douyin.com"}
|
||||
if keys.csrf_token:
|
||||
headers["x-secsdk-csrf-token"] = f"000100000001{keys.csrf_token[:32]}"
|
||||
headers.update(guard)
|
||||
resp = await client.post(f"{BASE}{path}?" + urlencode(qp), headers=headers, json=body_json, timeout=30.0)
|
||||
if not resp.text:
|
||||
return False, "create_v2 空响应(403)"
|
||||
r = resp.json()
|
||||
if r.get("status_code") == 0:
|
||||
return True, r.get("item_id", "")
|
||||
return False, f"status={r.get('status_code')}: {r.get('status_msg','')}"
|
||||
|
||||
|
||||
async def main():
|
||||
videos = sorted(VIDEO_DIR.glob("*.mp4"))
|
||||
if not videos:
|
||||
print("[✗] 未找到视频")
|
||||
return 1
|
||||
|
||||
print(f"[i] 共 {len(videos)} 条视频待发布\n")
|
||||
|
||||
keys = None
|
||||
if COOKIE_FILE.exists():
|
||||
keys = SecurityKeys(COOKIE_FILE)
|
||||
valid, uid, name = await check_cookie(keys)
|
||||
if valid:
|
||||
global USER_ID
|
||||
USER_ID = uid
|
||||
print(f"[✓] 已有有效 Cookie: {name} (uid={uid})")
|
||||
else:
|
||||
keys = None
|
||||
|
||||
if not keys:
|
||||
keys = await do_login()
|
||||
valid, uid, name = await check_cookie(keys)
|
||||
if not valid:
|
||||
print("[✗] 登录失败")
|
||||
return 1
|
||||
USER_ID = uid
|
||||
print(f"[✓] 登录成功: {name} (uid={uid})")
|
||||
|
||||
now_ts = int(time.time())
|
||||
base_ts = ((now_ts + 3600) // 3600 + 1) * 3600
|
||||
|
||||
results = []
|
||||
for i, vp in enumerate(videos):
|
||||
ts = base_ts + i * 3600
|
||||
title = TITLES.get(vp.name, f"{vp.stem} #Soul派对 #创业日记")
|
||||
fname = vp.name
|
||||
fsize = vp.stat().st_size
|
||||
dt_str = datetime.datetime.fromtimestamp(ts).strftime("%m-%d %H:%M")
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f" [{i+1}/{len(videos)}] {fname}")
|
||||
print(f" 大小: {fsize/1024/1024:.1f}MB | 定时: {dt_str}")
|
||||
print(f" 标题: {title[:60]}")
|
||||
print(f"{'='*60}")
|
||||
|
||||
valid, _, _ = await check_cookie(keys)
|
||||
if not valid:
|
||||
print(" Cookie 过期,重新登录...")
|
||||
keys = await do_login()
|
||||
valid, uid, _ = await check_cookie(keys)
|
||||
if not valid:
|
||||
print(" [✗] 登录失败,跳过")
|
||||
results.append((fname, False, ts))
|
||||
continue
|
||||
USER_ID = uid
|
||||
|
||||
try:
|
||||
async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
|
||||
ok, msg = await publish_one_video(keys, client, str(vp), title, ts)
|
||||
if ok:
|
||||
print(f" [✓] 发布成功! item_id={msg}")
|
||||
else:
|
||||
print(f" [✗] 失败: {msg}")
|
||||
results.append((fname, ok, ts))
|
||||
except Exception as e:
|
||||
print(f" [✗] 异常: {e}")
|
||||
results.append((fname, False, ts))
|
||||
|
||||
if i < len(videos) - 1:
|
||||
await asyncio.sleep(2)
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(" 发布汇总")
|
||||
print(f"{'='*60}")
|
||||
for name, ok, ts in results:
|
||||
s = "✓" if ok else "✗"
|
||||
t = datetime.datetime.fromtimestamp(ts).strftime("%m-%d %H:%M")
|
||||
print(f" [{s}] {t} | {name}")
|
||||
success = sum(1 for _, ok, _ in results if ok)
|
||||
print(f"\n 成功: {success}/{len(results)}")
|
||||
return 0 if success == len(results) else 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(asyncio.run(main()))
|
||||
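The scheduling arithmetic in `main()` above packs one post per hour, starting from an aligned future hour. A minimal standalone sketch of what `((now_ts + 3600) // 3600 + 1) * 3600` guarantees (the fixed timestamp is only for illustration):

```python
def first_slot(now_ts: int) -> int:
    # Same formula as main(): jump at least one hour ahead, then round up to
    # the next whole hour, so the slot is an exact hour boundary that lies
    # strictly more than 1 h and at most 2 h in the future.
    return ((now_ts + 3600) // 3600 + 1) * 3600

slot = first_slot(1_700_000_000)  # arbitrary fixed timestamp
assert slot == 1_700_006_400
assert slot % 3600 == 0                      # hour-aligned
assert 3600 < slot - 1_700_000_000 <= 7200   # 1-2 hours ahead
```

Each subsequent video then adds `i * 3600`, so slots never collide.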
421
03_卡木(木)/木叶_视频内容/抖音发布/脚本/douyin_live_publish.py
Normal file
@@ -0,0 +1,421 @@
#!/usr/bin/env python3
"""
Douyin video publisher v3 - browser keep-alive + live cookie refresh + pure-API upload
Key improvement: before each video is published, the browser actively navigates to
refresh the cookies, fixing the short-TTL expiry problem.
The user scans the QR code once; all videos are then published automatically.
"""
import asyncio
import base64
import datetime
import hashlib
import hmac
import json
import random
import string
import sys
import time
import zlib
from pathlib import Path
from urllib.parse import urlencode

import httpx
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import serialization
from cryptography import x509
from playwright.async_api import async_playwright

# ─── Config ───
VIDEO_DIR = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(
    "/Users/karuo/Movies/7607519346462286491_纳瓦尔3小时访谈 "
    "纳瓦尔_拉维坎特_经典三小时访谈_output/clips_enhanced"
)
BASE = "https://creator.douyin.com"
VOD_HOST = "https://vod.bytedanceapi.com"
CHUNK_SIZE = 3 * 1024 * 1024
UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)
MIN_TIMING_HOURS = 2.5  # schedule >= 2 h ahead (plus 0.5 h safety margin)


# ─── Browser cookie refresh ───
async def refresh_cookies(page):
    """Navigate the browser to the creator-center home page to trigger a server-side Set-Cookie refresh."""
    try:
        await page.goto(
            "https://creator.douyin.com/creator-micro/home",
            wait_until="domcontentloaded", timeout=15000,
        )
        await asyncio.sleep(2)
    except Exception:
        pass


async def extract_keys(context):
    """Extract cookies plus the localStorage security keys live from the Playwright BrowserContext."""
    cookies_raw = await context.cookies()
    cookies = {}
    for c in cookies_raw:
        if "douyin.com" in c.get("domain", ""):
            cookies[c["name"]] = c["value"]
    cookie_str = "; ".join(f"{k}={v}" for k, v in cookies.items())
    csrf_token = cookies.get("passport_csrf_token", "")
    ms_token = cookies.get("msToken", "")

    page = context.pages[0] if context.pages else None
    ec_private_key = None
    server_public_key = None
    ticket = ""
    ts_sign_raw = ""

    if page:
        try:
            ls_data = await page.evaluate("""() => {
                const r = {};
                for (let i = 0; i < localStorage.length; i++) {
                    const k = localStorage.key(i);
                    if (k.includes('s_sdk_') || k === 'xmst') r[k] = localStorage.getItem(k);
                }
                return r;
            }""")
        except Exception:
            ls_data = {}

        for name, val in ls_data.items():
            try:
                if "s_sdk_crypt_sdk" in name:
                    d = json.loads(json.loads(val)["data"])
                    pem = d["ec_privateKey"].replace("\\r\\n", "\n")
                    ec_private_key = serialization.load_pem_private_key(pem.encode(), password=None)
                elif "s_sdk_server_cert_key" in name:
                    cert_pem = json.loads(val)["cert"]
                    cert = x509.load_pem_x509_certificate(cert_pem.encode())
                    server_public_key = cert.public_key()
                elif "s_sdk_sign_data_key" in name and "web_protect" in name:
                    d = json.loads(json.loads(val)["data"])
                    ticket = d["ticket"]
                    ts_sign_raw = d["ts_sign"]
            except Exception:
                pass
        if not ms_token:
            ms_token = ls_data.get("xmst", "")

    return {
        "cookies": cookies, "cookie_str": cookie_str,
        "ms_token": ms_token, "csrf_token": csrf_token,
        "ec_private_key": ec_private_key, "server_public_key": server_public_key,
        "ticket": ticket, "ts_sign_raw": ts_sign_raw,
    }


# ─── bd-ticket-guard signing ───
def compute_guard(keys, path):
    pk, spk = keys["ec_private_key"], keys["server_public_key"]
    if not pk or not spk:
        return {}
    eph = ec.generate_private_key(ec.SECP256R1())
    eph_pub = eph.public_key().public_bytes(
        serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint)
    shared = eph.exchange(ec.ECDH(), spk)
    ts = int(time.time())
    ts_hex = keys["ts_sign_raw"].replace("ts.2.", "")
    tb = bytes.fromhex(ts_hex)
    new_first = bytes(a ^ b for a, b in zip(tb[:32], shared[:32]))
    new_ts = "ts.2." + (new_first + tb[32:]).hex()
    msg = f'{keys["ticket"]},{path},{ts}'
    sig = hmac.new(shared, msg.encode(), hashlib.sha256).digest()
    cd = {"ts_sign": new_ts, "req_content": "ticket,path,timestamp",
          "req_sign": base64.b64encode(sig).decode(), "timestamp": ts}
    return {
        "bd-ticket-guard-client-data": base64.b64encode(json.dumps(cd).encode()).decode(),
        "bd-ticket-guard-ree-public-key": base64.b64encode(eph_pub).decode(),
        "bd-ticket-guard-version": "2", "bd-ticket-guard-web-version": "2",
        "bd-ticket-guard-web-sign-type": "1",
    }


# ─── AWS4 signing ───
def _hm(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def _rand(n=11):
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))


def aws4(ak, sk, token, qs, method="GET", body=b""):
    now = datetime.datetime.now(datetime.timezone.utc)
    ad = now.strftime("%Y%m%dT%H%M%SZ")
    ds = now.strftime("%Y%m%d")
    rg, sv = "cn-north-1", "vod"
    bh = hashlib.sha256(body).hexdigest()
    if method == "POST":
        sh = "content-type;x-amz-date;x-amz-security-token"
        hs = f"content-type:text/plain;charset=UTF-8\nx-amz-date:{ad}\nx-amz-security-token:{token}\n"
    else:
        sh = "x-amz-date;x-amz-security-token"
        hs = f"x-amz-date:{ad}\nx-amz-security-token:{token}\n"
    can = f"{method}\n/\n{qs}\n{hs}\n{sh}\n{bh}"
    scope = f"{ds}/{rg}/{sv}/aws4_request"
    sts = f"AWS4-HMAC-SHA256\n{ad}\n{scope}\n{hashlib.sha256(can.encode()).hexdigest()}"
    k = _hm(f"AWS4{sk}".encode(), ds)
    k = _hm(k, rg); k = _hm(k, sv); k = _hm(k, "aws4_request")
    sig = hmac.new(k, sts.encode(), hashlib.sha256).hexdigest()
    return f"AWS4-HMAC-SHA256 Credential={ak}/{scope}, SignedHeaders={sh}, Signature={sig}", ad, token


# ─── Title generation ───
def make_title(highlights, idx, filename):
    if idx < len(highlights):
        h = highlights[idx]
        excerpt = h.get("transcript_excerpt", "").strip()
        if len(excerpt) > 80:
            excerpt = excerpt[:77] + "..."
        return f"纳瓦尔·拉维坎特:{excerpt} #纳瓦尔 #人生智慧 #认知升级 #财富自由"
    stem = Path(filename).stem.replace("_enhanced", "")
    return f"纳瓦尔3小时访谈精华|{stem} #纳瓦尔 #认知升级 #人生智慧"


# ─── Publish one video ───
async def publish_one(context, client, video_path, title, timing_ts, idx, total):
    fname = Path(video_path).name
    fsize = Path(video_path).stat().st_size
    dt_str = datetime.datetime.fromtimestamp(timing_ts).strftime("%m-%d %H:%M") if timing_ts > 0 else "立即"

    print(f"\n{'='*60}")
    print(f" [{idx}/{total}] {fname}")
    print(f" {fsize/1024/1024:.1f}MB | 定时: {dt_str}")
    print(f" {title[:70]}")
    print(f"{'='*60}")

    # ★ Refresh cookies before every video
    page = context.pages[0] if context.pages else None
    if page:
        await refresh_cookies(page)

    keys = await extract_keys(context)
    if not keys["cookie_str"]:
        return False, "无Cookie"

    h = {"Cookie": keys["cookie_str"], "User-Agent": UA}

    # Validate the cookie
    resp = await client.get(f"{BASE}/web/api/media/user/info/", headers=h)
    data = resp.json()
    if data.get("status_code") != 0:
        return False, f"Cookie无效: {data.get('status_msg','')}"
    user = data.get("user") or data.get("user_info") or {}
    user_id = str(user.get("uid", "") or user.get("user_id", ""))

    # 1) Auth
    resp = await client.get(f"{BASE}/web/api/media/upload/auth/v5/", headers=h)
    ad = resp.json()
    if ad.get("status_code") != 0:
        return False, f"auth: {ad.get('status_msg','')}"
    auth = json.loads(ad["auth"])
    print(" [1] Auth ✓")

    # 2) Apply
    params = {"Action": "ApplyUploadInner", "FileSize": str(fsize), "FileType": "video",
              "IsInner": "1", "SpaceName": "aweme", "Version": "2020-11-19",
              "app_id": "2906", "s": _rand()}
    qs = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    az, ad2, tk = aws4(auth["AccessKeyID"], auth["SecretAccessKey"], auth["SessionToken"], qs)
    resp = await client.get(f"{VOD_HOST}/?{qs}",
                            headers={"authorization": az, "x-amz-date": ad2, "x-amz-security-token": tk, "User-Agent": UA})
    result = resp.json().get("Result", {})
    nodes = (result.get("InnerUploadAddress") or {}).get("UploadNodes", [])
    if not nodes:
        return False, "无UploadNodes"
    node = nodes[0]
    store = node["StoreInfos"][0]
    host, uid2 = node["UploadHost"], store["UploadID"]
    base_url = f"https://{host}/upload/v1/{store['StoreUri']}"
    ah = {"Authorization": store["Auth"], "User-Agent": UA}
    print(f" [2] Apply ✓ vid={node['Vid'][:25]}...")

    # 3) Upload chunks
    raw = Path(video_path).read_bytes()
    nc = (len(raw) + CHUNK_SIZE - 1) // CHUNK_SIZE
    crc_parts = []
    for i in range(nc):
        chunk = raw[i*CHUNK_SIZE:(i+1)*CHUNK_SIZE]
        crc = "%08x" % (zlib.crc32(chunk) & 0xFFFFFFFF)
        r = await client.post(
            f"{base_url}?uploadid={uid2}&part_number={i+1}&phase=transfer",
            content=chunk, headers={**ah, "Content-CRC32": crc,
                                    "Content-Type": "application/octet-stream"}, timeout=120.0)
        rd = r.json() if r.status_code == 200 else {}
        if rd.get("code") != 2000:
            return False, f"chunk{i+1}: {rd}"
        sv_crc = rd.get("data", {}).get("crc32", crc)
        crc_parts.append(f"{i+1}:{sv_crc}")
    fr = await client.post(f"{base_url}?uploadid={uid2}&phase=finish",
                           content=",".join(crc_parts).encode(),
                           headers={**ah, "Content-Type": "text/plain"}, timeout=60.0)
    fd = fr.json() if fr.status_code == 200 else {}
    if fd.get("code") != 2000:
        return False, f"finish: {fd.get('message','')}"
    print(f" [3] Upload ✓ ({nc} chunks)")

    # 4) Commit
    qs2 = "&".join(f"{k}={v}" for k, v in sorted({
        "Action": "CommitUploadInner", "SpaceName": "aweme",
        "Version": "2020-11-19", "app_id": "2906", "user_id": user_id}.items()))
    body = json.dumps({"SessionKey": node["SessionKey"],
                       "Functions": [{"Name": "GetMeta"}]}).encode("utf-8")
    a2, d2, t2 = aws4(auth["AccessKeyID"], auth["SecretAccessKey"],
                      auth["SessionToken"], qs2, method="POST", body=body)
    cr = await client.post(f"{VOD_HOST}/?{qs2}", content=body,
                           headers={"authorization": a2, "x-amz-date": d2, "x-amz-security-token": t2,
                                    "content-type": "text/plain;charset=UTF-8", "User-Agent": UA}, timeout=30.0)
    cd = cr.json()
    results = cd.get("Result", {}).get("Results", [])
    video_id = results[0].get("Vid", "") if results else ""
    if not video_id:
        return False, f"commit: {cd}"
    print(f" [4] Commit ✓ video_id={video_id[:25]}...")

    # 5) create_v2 - refresh cookies once more (publishing is the most critical step)
    if page:
        await refresh_cookies(page)
    keys2 = await extract_keys(context)

    path = "/web/api/media/aweme/create_v2/"
    cid = f"{_rand(8)}{int(time.time()*1000)}"
    bj = {"item": {"common": {
        "text": title, "caption": title, "visibility_type": 0, "download": 1,
        "timing": timing_ts if timing_ts > 0 else 0, "creation_id": cid,
        "media_type": 4, "video_id": video_id, "music_source": 0, "music_id": None,
    }, "cover": {"poster": "", "poster_delay": 0}}}
    guard = compute_guard(keys2, path)
    qp = {"read_aid": "2906", "cookie_enabled": "true", "aid": "1128"}
    if keys2["ms_token"]:
        qp["msToken"] = keys2["ms_token"]
    headers = {
        "Cookie": keys2["cookie_str"], "User-Agent": UA,
        "Content-Type": "application/json",
        "Accept": "application/json, text/plain, */*",
        "Referer": "https://creator.douyin.com/creator-micro/content/post/video",
        "Origin": "https://creator.douyin.com",
    }
    if keys2["csrf_token"]:
        headers["x-secsdk-csrf-token"] = f"000100000001{keys2['csrf_token'][:32]}"
    headers.update(guard)
    resp = await client.post(f"{BASE}{path}?" + urlencode(qp),
                             headers=headers, json=bj, timeout=30.0)
    if not resp.text:
        return False, f"create_v2 空响应(HTTP {resp.status_code})"
    r = resp.json()
    if r.get("status_code") == 0:
        return True, r.get("item_id", "")
    return False, f"status={r.get('status_code')}: {r.get('status_msg','')}"


# ─── Main flow ───
async def main():
    videos = sorted(VIDEO_DIR.glob("*.mp4"))
    if not videos:
        print(f"[✗] 未找到 mp4: {VIDEO_DIR}")
        return 1
    print(f"[i] 目录: {VIDEO_DIR}")
    print(f"[i] 共 {len(videos)} 条视频\n")

    highlights_file = VIDEO_DIR.parent / "highlights.json"
    highlights = []
    if highlights_file.exists():
        highlights = json.loads(highlights_file.read_text())
        print(f"[i] highlights.json: {len(highlights)} 条\n")

    print("=" * 60)
    print(" 请用【未封禁】的抖音号扫码登录")
    print(" 如果之前的账号被封禁,请换一个号")
    print(" 登录后点 Playwright Inspector 绿色 ▶")
    print("=" * 60 + "\n")

    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)
        context = await browser.new_context(
            user_agent=UA, viewport={"width": 1280, "height": 720})
        await context.add_init_script(
            "Object.defineProperty(navigator,'webdriver',{get:()=>undefined});")
        page = await context.new_page()
        await page.goto("https://creator.douyin.com/", timeout=60000)
        await page.pause()

        # Verify login + check publish permission
        keys = await extract_keys(context)
        async with httpx.AsyncClient(timeout=10.0) as c:
            resp = await c.get(f"{BASE}/web/api/media/user/info/",
                               headers={"Cookie": keys["cookie_str"], "User-Agent": UA})
        data = resp.json()
        if data.get("status_code") != 0:
            print("[✗] 登录失败")
            await browser.close()
            return 1
        user = data.get("user") or data.get("user_info") or {}
        nickname = user.get("nickname", "?")
        uid = user.get("uid", "")
        print(f"\n[✓] 账号: {nickname} (uid={uid})")

        # Quick probe of the upload-auth endpoint to detect a banned account
        print("[i] 检查发布权限...")
        keys_test = await extract_keys(context)
        async with httpx.AsyncClient(timeout=30.0, follow_redirects=True) as tc:
            tr = await tc.get(f"{BASE}/web/api/media/upload/auth/v5/",
                              headers={"Cookie": keys_test["cookie_str"], "User-Agent": UA})
        td = tr.json()
        if td.get("status_code") != 0:
            print(f"[✗] 权限检查失败: {td.get('status_msg','')}")
            await browser.close()
            return 1

        # Scheduling: start >= 2.5 hours from now, one post per hour
        now_ts = int(time.time())
        base_ts = now_ts + int(MIN_TIMING_HOURS * 3600)
        base_ts = ((base_ts + 3599) // 3600) * 3600  # align to the top of the hour

        print(f"[i] 定时发布从 {datetime.datetime.fromtimestamp(base_ts).strftime('%m-%d %H:%M')} 开始")
        print("[i] 浏览器保持运行,每条发布前自动刷新 Cookie\n")

        results = []
        async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
            for i, vp in enumerate(videos):
                ts = base_ts + i * 3600
                title = make_title(highlights, i, vp.name)
                try:
                    ok, msg = await publish_one(
                        context, client, str(vp), title, ts, i+1, len(videos))
                    if ok:
                        print(f" [✓] 发布成功! item_id={msg}")
                    else:
                        print(f" [✗] {msg}")
                        if "封禁" in str(msg) or "未登录" in str(msg):
                            print("\n !! 严重错误,停止发布")
                            results.append((vp.name, False, ts, msg))
                            break
                    results.append((vp.name, ok, ts, msg if not ok else ""))
                except Exception as e:
                    print(f" [✗] 异常: {e}")
                    results.append((vp.name, False, ts, str(e)))

                if i < len(videos) - 1:
                    await asyncio.sleep(2)

        # Summary
        print(f"\n{'='*60}")
        print(" 发布汇总")
        print(f"{'='*60}")
        for name, ok, ts, msg in results:
            s = "✓" if ok else "✗"
            t = datetime.datetime.fromtimestamp(ts).strftime("%m-%d %H:%M")
            extra = f" | {msg[:35]}" if msg else ""
            print(f" [{s}] {t} | {name[:40]}{extra}")
        success = sum(1 for _, ok, _, _ in results if ok)
        print(f"\n 成功: {success}/{len(results)}")

        await browser.close()
    return 0 if success == len(results) else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
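Both scripts derive the SigV4 signing key with the same four-step HMAC-SHA256 chain (`aws4_sign` / `aws4` above). A standalone sketch of just that chain follows; the secret and date here are made-up placeholders, not real credentials:

```python
import hashlib
import hmac

def _hm(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def signing_key(sk: str, ds: str, region: str, service: str) -> bytes:
    # Same chain as aws4(): "AWS4"+secret -> date -> region -> service -> "aws4_request".
    k = _hm(f"AWS4{sk}".encode(), ds)
    for part in (region, service, "aws4_request"):
        k = _hm(k, part)
    return k

k = signing_key("EXAMPLE-SECRET", "20260318", "cn-north-1", "vod")
assert len(k) == 32  # HMAC-SHA256 digest size
# Keys are date-scoped: a different day yields a different key, which is why
# the scripts re-derive the key on every request instead of caching it.
assert k != signing_key("EXAMPLE-SECRET", "20260319", "cn-north-1", "vod")
```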
@@ -1,26 +1,29 @@
#!/usr/bin/env python3
"""Fetch Douyin cookies: pop up a browser → QR-code login → save storage_state."""
import asyncio
import sys
from pathlib import Path

WANTUI = Path("/Users/karuo/Documents/开发/3、自营项目/万推/backend")
sys.path.insert(0, str(WANTUI))

from playwright.async_api import async_playwright
from utils.base_social_media import set_init_script

COOKIE_FILE = Path(__file__).parent / "douyin_storage_state.json"


async def main():
    print("即将弹出浏览器,请扫码登录抖音创作者中心。")
    print("即将弹出浏览器,请用新抖音号扫码登录。")
    print("登录成功后,在 Playwright Inspector 窗口中点击绿色 ▶ 按钮。\n")

    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)
        context = await browser.new_context()
        context = await set_init_script(context)
        context = await browser.new_context(
            user_agent=(
                "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                "AppleWebKit/537.36 (KHTML, like Gecko) "
                "Chrome/143.0.0.0 Safari/537.36"
            ),
            viewport={"width": 1280, "height": 720},
        )
        await context.add_init_script("""
            Object.defineProperty(navigator, 'webdriver', { get: () => undefined });
        """)
        page = await context.new_page()
        await page.goto("https://creator.douyin.com/", timeout=60000)
        await page.pause()
@@ -30,7 +33,7 @@ async def main():

    print(f"\n[✓] Cookie 已保存到: {COOKIE_FILE}")
    print(f" 文件大小: {COOKIE_FILE.stat().st_size} bytes")
    print("现在可以运行 douyin_batch_publish.py 批量发布了。")
    print("现在可以运行 douyin_pure_api.py 批量发布了。")


if __name__ == "__main__":
@@ -49,35 +49,35 @@ UA = (

TITLES = {
    "早起不是为了开派对,是不吵老婆睡觉.mp4":
        "早起不是为了开派对,是不吵老婆睡觉。初衷就这一个。#Soul派对 #创业日记 #晨间直播 #私域干货",
        "每天6点起床不是因为自律,是因为老婆还在睡。创业人最真实的起床理由,你猜到了吗?#Soul派对 #创业日记 #晨间直播 #真实创业",
    "懒人的活法 动作简单有利可图正反馈.mp4":
        "懒有懒的活法:动作简单、有利可图、正反馈,就能坐得住。#Soul派对 #副业 #私域 #切片变现",
        "懒人也能赚钱?关键就三个词:动作简单、有利可图、正反馈。90%的人输在太勤快了 #Soul派对 #副业思维 #私域变现 #认知升级",
    "初期团队先找两个IS,比钱好使 ENFJ链接人,ENTJ指挥.mp4":
        "初期团队先找两个IS,比钱好使。ENFJ链接人,ENTJ指挥。#MBTI #创业团队 #Soul派对",
        "创业初期别急着找钱,先找两个IS型人格。ENFJ负责链接,ENTJ负责指挥,比融资好使十倍 #MBTI创业 #团队搭建 #Soul派对 #合伙人",
    "ICU出来一年多 活着要在互联网上留下东西.mp4":
        "ICU出来一年多,活着要在互联网上留下东西。#人生感悟 #创业 #Soul派对 #记录生活",
        "ICU出来一年多了。那之后我想明白一件事:活着,就要在互联网上留下点东西 #人生感悟 #创业觉醒 #Soul派对 #向死而生",
    "MBTI疗愈SOUL 年轻人测MBTI,40到60岁走五行八卦.mp4":
        "年轻人测MBTI,40到60岁走五行八卦。#MBTI #Soul派对 #五行 #疗愈",
        "20岁测MBTI,40岁以后该学五行八卦了。年轻人用性格分类,中年人靠命理运营自己 #MBTI #五行 #Soul派对 #认知觉醒",
    "Soul业务模型 派对+切片+小程序全链路.mp4":
        "Soul业务模型:派对+切片+小程序全链路。#Soul派对 #商业模式 #私域运营 #小程序",
        "一个人怎么跑通一条商业链路?派对获客→AI切片→小程序变现,全链路拆给你看 #Soul派对 #商业模式 #全链路 #一人公司",
    "Soul切片30秒到8分钟 AI半小时能剪10到30个.mp4":
        "Soul切片30秒到8分钟,AI半小时能剪10到30个。#AI剪辑 #Soul派对 #切片变现 #效率工具",
        "AI剪辑有多快?30秒到8分钟的切片,半小时出10到30条。内容工厂的效率密码 #AI剪辑 #Soul派对 #内容效率 #批量生产",
    "刷牙听业务逻辑 Soul切片变现怎么跑.mp4":
        "刷牙听业务逻辑:Soul切片变现怎么跑。#Soul派对 #切片变现 #副业 #商业逻辑",
        "刷牙3分钟,刚好听完一套变现逻辑。Soul切片怎么从0到日产30条?碎片时间才是生产力 #Soul派对 #碎片创业 #副业逻辑 #效率",
    "国学易经怎么学 两小时七七八八,召唤作者对话.mp4":
        "国学易经怎么学?两小时七七八八,召唤作者对话。#国学 #易经 #Soul派对 #学习方法",
        "易经其实不难,两小时就能学个七七八八。关键是找到作者的思维频率,跟古人对话 #国学 #易经入门 #Soul派对 #终身学习",
    "广点通能投Soul了,1000曝光6到10块.mp4":
        "广点通能投Soul了,1000曝光6到10块。#Soul派对 #广点通 #流量投放 #私域获客",
        "广点通终于能投Soul了!1000次曝光只要6到10块,这个获客成本你敢信?#Soul派对 #广点通投放 #低成本获客 #流量红利",
    "建立信任不是求来的 卖外挂发邮件三个月拿下德国总代.mp4":
        "建立信任不是求来的。卖外挂发邮件三个月拿下德国总代。#销售 #信任 #Soul派对 #商业故事",
        "信任不是求来的。一个卖外挂的小伙子,发了三个月邮件,拿下德国总代理。死磕比社交有用 #销售思维 #信任建立 #Soul派对 #死磕精神",
    "核心就两个字 筛选。能开派对坚持7天的人再谈.mp4":
        "核心就两个字:筛选。能开派对坚持7天的人再谈。#筛选 #Soul派对 #创业 #坚持",
        "别跟所有人合作,核心就两个字:筛选。能坚持开7天派对的人,才值得深聊 #筛选思维 #Soul派对 #创业认知 #人性",
    "睡眠不好?每天放下一件事,做减法.mp4":
        "睡眠不好?每天放下一件事,做减法。#睡眠 #减法 #Soul派对 #生活方式",
        "睡不好不是因为太累,是因为脑子里装太多。每天放下一件事,做减法,睡眠自然好 #睡眠 #做减法 #Soul派对 #心理健康",
    "这套体系花了170万,但前端几十块就能参与.mp4":
        "这套体系花了170万,但前端几十块就能参与。#商业体系 #Soul派对 #私域 #低成本创业",
        "后端花了170万搭的体系,前端几十块就能参与。真正的商业模式是让别人低成本上车 #商业认知 #Soul派对 #低门槛创业 #体系思维",
    "金融AI获客体系 后端30人沉淀12年,前端丢手机.mp4":
        "金融AI获客体系:后端30人沉淀12年,前端丢手机。#AI获客 #金融 #Soul派对 #商业模式",
        "后端30人沉淀了12年,前端操作就是丢个手机号。金融AI获客体系,把复杂留给自己 #AI获客 #金融科技 #Soul派对 #系统思维",
}
@@ -184,7 +184,7 @@ def _hmac_sha256(key: bytes, msg: str) -> bytes:
|
||||
return hmac.new(key, msg.encode(), hashlib.sha256).digest()
|
||||
|
||||
|
||||
USER_ID = "95519194897"
|
||||
USER_ID = "" # 自动从 user_info 接口获取
|
||||
|
||||
|
||||
def aws4_sign(ak: str, sk: str, token: str, qs: str,
|
||||
@@ -340,7 +340,7 @@ async def upload_chunks(
|
||||
crc_parts.append(f"{i+1}:{sv_crc}")
|
||||
print(f" chunk {i+1}/{n_chunks} ok (crc32={sv_crc})")
|
||||
|
||||
finish_body = "\n".join(crc_parts).encode()
|
||||
finish_body = ",".join(crc_parts).encode()
|
||||
finish_resp = await client.post(
|
||||
f"{base_url}?uploadid={upload_id}&phase=finish",
|
||||
content=finish_body,
|
||||
@@ -539,13 +539,13 @@ async def create_v2(
|
||||
# 单视频发布
|
||||
# ═══════════════════════════════════════════════════════════
|
||||
async def publish_one(
|
||||
keys: SecurityKeys,
|
||||
video_path: str,
|
||||
title: str,
|
||||
timing_ts: int = 0,
|
||||
idx: int = 1,
|
||||
total: int = 1,
|
||||
) -> bool:
|
||||
global USER_ID
|
||||
fname = Path(video_path).name
|
||||
fsize = Path(video_path).stat().st_size
|
||||
timing_str = datetime.datetime.fromtimestamp(timing_ts).strftime("%m-%d %H:%M") if timing_ts > 0 else "立即"
|
||||
@@ -556,25 +556,40 @@ async def publish_one(
     print(f"  标题: {title[:60]}")
     print(f"{'='*60}")

-    async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
-        auth = await get_upload_auth(client, keys)
-        info = await apply_upload(client, auth, fsize)
-        if not await upload_chunks(client, info, video_path):
-            print("  [✗] 上传失败")
-            return False
-        video_id = await commit_upload(client, auth, info["session_key"])
-        if not video_id:
-            print("  [✗] 未获取到 video_id")
-            return False
-        await wait_video_ready(client, keys, video_id)
-        result = await create_v2(client, keys, video_id, title, timing_ts)
+    try:
+        keys = SecurityKeys(COOKIE_FILE)
+        async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
+            resp = await client.get(
+                USER_INFO_URL, headers={"Cookie": keys.cookie_str, "User-Agent": UA}
+            )
+            uid_data = resp.json()
+            if uid_data.get("status_code") != 0:
+                print(f"  [✗] Cookie 已过期,请重新运行 douyin_login.py")
+                return False
+            user = uid_data.get("user") or uid_data.get("user_info") or {}
+            USER_ID = str(user.get("uid", "") or user.get("user_id", ""))

-        if result.get("status_code") == 0:
-            print("  [✓] 发布成功!")
-            return True
-        else:
-            print(f"  [✗] 发布失败: {result}")
-            return False
+            auth = await get_upload_auth(client, keys)
+            info = await apply_upload(client, auth, fsize)
+            if not await upload_chunks(client, info, video_path):
+                print("  [✗] 上传失败")
+                return False
+            video_id = await commit_upload(client, auth, info["session_key"])
+            if not video_id:
+                print("  [✗] 未获取到 video_id")
+                return False
+            result = await create_v2(client, keys, video_id, title, timing_ts)

+            if result.get("status_code") == 0:
+                item_id = result.get("item_id", "")
+                print(f"  [✓] 发布成功! item_id={item_id}")
+                return True
+            else:
+                print(f"  [✗] 发布失败: {result}")
+                return False
+    except Exception as e:
+        print(f"  [✗] 异常: {e}")
+        return False


 # ═══════════════════════════════════════════════════════════
@@ -587,13 +602,11 @@ async def main():

     keys = SecurityKeys(COOKIE_FILE)
     print(f"[✓] Cookie 加载 ({len(keys.cookies)} items)")
-    print(f"  msToken: {'✓' if keys.ms_token else '✗'}")
-    print(f"  ec_privateKey: {'✓' if keys.ec_private_key else '✗'}")
-    print(f"  server_public_key: {'✓' if keys.server_public_key else '✗'}")
-    print(f"  ticket: {'✓' if keys.ticket else '✗'}")
-    print(f"  ts_sign: {'✓' if keys.ts_sign_raw else '✗'}")
-    print(f"  csrf_token: {'✓' if keys.csrf_token else '✗'}")
+    for k in ("ms_token", "ec_private_key", "server_public_key", "ticket", "ts_sign_raw", "csrf_token"):
+        v = getattr(keys, k, None)
+        print(f"  {k}: {'✓' if v else '✗'}")

+    global USER_ID
     async with httpx.AsyncClient(timeout=15.0) as c:
         resp = await c.get(
             USER_INFO_URL, headers={"Cookie": keys.cookie_str, "User-Agent": UA}
@@ -602,7 +615,10 @@ async def main():
         if data.get("status_code") != 0:
             print(f"[✗] Cookie 无效: {data}")
             return 1
-        print(f"[✓] 用户: {data.get('user_info', {}).get('nickname', 'unknown')}\n")
+        user = data.get("user") or data.get("user_info") or {}
+        nickname = user.get("nickname", "unknown")
+        USER_ID = str(user.get("uid", "") or user.get("user_id", ""))
+        print(f"[✓] 用户: {nickname} (uid={USER_ID})\n")

     videos = sorted(VIDEO_DIR.glob("*.mp4"))
     if not videos:
@@ -623,11 +639,12 @@ async def main():

     results = []
     for i, (vp, title, ts) in enumerate(schedule):
-        ok = await publish_one(keys, str(vp), title, ts, i + 1, len(schedule))
+        ok = await publish_one(str(vp), title, ts, i + 1, len(schedule))
         results.append((vp.name, ok, ts))
-        if i < len(schedule) - 1 and ok:
-            print("  等待 5s...")
-            await asyncio.sleep(5)
+        if i < len(schedule) - 1:
+            wait = 3 if ok else 1
+            print(f"  等待 {wait}s...")
+            await asyncio.sleep(wait)

     print(f"\n{'='*60}")
     print("  发布汇总")
File diff suppressed because one or more lines are too long
@@ -1,40 +1,40 @@
 # 远志 · 一个月每日视频任务清单(模板)

-> 源自远志安排:视频剪辑→切片→分发全网,目标 **每天 500 个视频**。
+> 源自远志安排:视频剪辑→切片→工具分发,目标 **每天 200 个视频**;工具研发:每天切 10-30 个。
 > SOP 见 `Soul竖屏切片_SKILL.md` 等;本清单按日拆解,便于执行与复盘。

 ---

-## 每日固定动作(500 视频/日)
+## 每日固定动作(200 视频/日)

-| 序号 | 动作 | 数量/时长 | 说明 |
-|------|------|-----------|------|
-| 1 | 剪辑母片 | 视素材而定 | 粗剪/精剪,输出可切片素材 |
-| 2 | 切片生成 | 目标 500 条/日 | Soul 竖屏/四屏等多尺寸 |
-| 3 | 分发上传 | 500 条 | 抖音/快手/视频号/小红书等全网平台 |
-| 4 | 数据记录 | 1 次 | 当日发布数、完播率等关键指标 |
+| 序号 | 动作 | 数量/说明 |
+|------|------|-----------|
+| 1 | 切片工具 | 每天切 10-30 个视频 |
+| 2 | 工具分发 | 200 条/日 → 各平台 |
+| 3 | 售内容产出 | 含内容生产,按年度目标 % 推进 |
+| 4 | 数据记录 | 当日发布数、完成度 % |

 ---

 ## 一周示例(按日)

-| 日期 | 剪辑任务 | 切片目标 | 分发平台 | 完成度 |
-|------|----------|----------|----------|--------|
-| 周一 | 母片 X 条 | 500 | 全平台 | X% |
-| 周二 | 母片 X 条 | 500 | 全平台 | X% |
-| 周三 | 母片 X 条 | 500 | 全平台 | X% |
-| 周四 | 母片 X 条 | 500 | 全平台 | X% |
-| 周五 | 母片 X 条 | 500 | 全平台 | X% |
-| 周六 | 母片 X 条 | 500 | 全平台 | X% |
-| 周日 | 母片 X 条 | 500 | 全平台 | X% |
+| 日期 | 切片工具 | 分发目标 | 售内容 | 完成度 |
+|------|----------|----------|--------|--------|
+| 周一 | 10-30 条 | 200 | 产出中 | X% |
+| 周二 | 10-30 条 | 200 | 产出中 | X% |
+| 周三 | 10-30 条 | 200 | 产出中 | X% |
+| 周四 | 10-30 条 | 200 | 产出中 | X% |
+| 周五 | 10-30 条 | 200 | 产出中 | X% |
+| 周六 | 10-30 条 | 200 | 产出中 | X% |
+| 周日 | 10-30 条 | 200 | 产出中 | X% |

 ---

 ## 一个月节奏建议

-- **第 1 周**:SOP 跑通、工具链稳定,目标 300/日
-- **第 2 周**:提速至 450/日
-- **第 3~4 周**:稳定 500/日,优化爆款率
+- **第 1 周**:工具研发跑通,目标 100/日
+- **第 2 周**:工具稳定,目标 150/日
+- **第 3~4 周**:稳定 200/日,售内容与年度目标 % 对齐

 ---
85  03_卡木(木)/木叶_视频内容/视频号发布/SKILL.md  Normal file
@@ -0,0 +1,85 @@
---
name: 视频号发布
description: >
  纯 API 命令行方式发布视频到微信视频号(不打开浏览器)。通过逆向视频号助手的 finder-assistant
  腾讯云上传接口,实现 Cookie 认证 → applyuploaddfs → uploadpartdfs → completepartuploaddfs → 发布的完整链路。
triggers: 视频号发布、发布到视频号、视频号登录、视频号上传、微信视频号
owner: 木叶
group: 木
version: "1.0"
updated: "2026-03-10"
---

# 视频号发布 Skill(v1.0)

> **核心能力**:纯 Python 命令行,通过逆向视频号助手(finder-assistant)的腾讯云上传接口实现视频上传与发布。
> **认证方式**:Playwright 微信扫码登录获取 Cookie,之后全程 API 操作。
> **API 来源**:推兔(TuiTool)逆向分析,server.min.bin 中明确使用 finder-assistant 系列接口。

---

## 一、纯 API 完整流程(4 步)

```
[Step 1] Cookie 认证
    Playwright 微信扫码 → channels_storage_state.json
    登录地址: https://channels.weixin.qq.com/login

[Step 2] 申请上传 (applyuploaddfs)
    POST finder-assistant.mp.video.tencent-cloud.com/applyuploaddfs
    参数: fileName, fileSize, fileType
    返回: UploadID(分片标识)

[Step 3] 分片上传 (uploadpartdfs)
    POST /uploadpartdfs?PartNumber=N&UploadID=xxx
    body: 视频二进制分片

[Step 4] 完成上传 + 发布
    POST /completepartuploaddfs?UploadID=xxx
    POST /cgi-bin/mmfinderassistant-bin/helper/helper_video_publish
```
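上述 Step 3 的分片按 3MB 切块、PartNumber 从 1 开始。切片区间的计算可以用一个纯函数示意(CHUNK_SIZE 与 channels_publish.py 中一致;函数本身仅为示意,并非脚本原文):

```python
CHUNK_SIZE = 3 * 1024 * 1024  # 与 channels_publish.py 一致:3MB 分片


def iter_parts(total: int, chunk_size: int = CHUNK_SIZE):
    """按 PartNumber(从 1 开始)产出 (part_number, start, end) 切片区间。"""
    n_chunks = (total + chunk_size - 1) // chunk_size  # 分片数向上取整
    for i in range(n_chunks):
        start = i * chunk_size
        end = min(start + chunk_size, total)
        yield i + 1, start, end
```

例如 7MB 的文件会切成 3 片(3MB + 3MB + 1MB),对应三次 `uploadpartdfs?PartNumber=1..3` 请求。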
---

## 二、一键命令

```bash
cd /Users/karuo/Documents/个人/卡若AI/03_卡木(木)/木叶_视频内容/视频号发布/脚本

# 1. 首次或 Cookie 过期:微信扫码登录
python3 channels_login.py

# 2. 批量发布
python3 channels_publish.py
```

---

## 三、Cookie 有效期

| Cookie | 有效期 | 说明 |
|--------|--------|------|
| 视频号助手 session | ~24-48h | 过期需重新微信扫码 |

视频号 Cookie 有效期较短,建议每天使用前检查。
---

## 四、推兔实现参考

推兔 server.min.bin 中的视频号上传链路:
- `applyuploaddfs`(申请上传)
- `uploadpartdfs?PartNumber=&UploadID=`(分片)
- `completepartuploaddfs?UploadID=`(完成)

与官方「视频号助手」上传链路一致,属于腾讯内部接口。

---

## 五、相关文件

| 文件 | 说明 |
|------|------|
| `脚本/channels_publish.py` | **主脚本**:纯 API 视频上传+发布 |
| `脚本/channels_login.py` | Playwright 微信扫码登录 |
| `脚本/channels_storage_state.json` | Cookie 存储(生成后自动创建) |
45  03_卡木(木)/木叶_视频内容/视频号发布/脚本/channels_login.py  Normal file
@@ -0,0 +1,45 @@
#!/usr/bin/env python3
"""视频号 Cookie 获取 - Playwright 扫码登录 → 保存 storage_state"""
import asyncio
from pathlib import Path
from playwright.async_api import async_playwright

COOKIE_FILE = Path(__file__).parent / "channels_storage_state.json"
LOGIN_URL = "https://channels.weixin.qq.com/login"

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)


async def main():
    print("即将弹出浏览器,请用微信扫码登录视频号助手。")
    print("登录成功后(看到视频号助手主页),按 Enter 或在 Inspector 点绿色 ▶。\n")

    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=False)
        context = await browser.new_context(user_agent=UA, viewport={"width": 1280, "height": 720})
        await context.add_init_script("Object.defineProperty(navigator,'webdriver',{get:()=>undefined})")
        page = await context.new_page()
        await page.goto(LOGIN_URL, timeout=60000)

        print("等待微信扫码登录...")
        try:
            await page.wait_for_url("**/platform**", timeout=180000)
            await asyncio.sleep(3)
        except Exception:
            print("未自动检测到跳转,请手动确认已登录后按 Enter")
            await page.pause()

        await context.storage_state(path=str(COOKIE_FILE))
        await context.close()
        await browser.close()

    print(f"\n[✓] 视频号 Cookie 已保存: {COOKIE_FILE}")
    print(f"  文件大小: {COOKIE_FILE.stat().st_size} bytes")
    print("现在可运行 channels_publish.py 批量发布。")


if __name__ == "__main__":
    asyncio.run(main())
284  03_卡木(木)/木叶_视频内容/视频号发布/脚本/channels_publish.py  Normal file
@@ -0,0 +1,284 @@
#!/usr/bin/env python3
"""
视频号纯 API 视频发布(无浏览器)
基于推兔逆向分析: finder-assistant 腾讯云上传接口

流程:
1. 从 storage_state.json 加载 cookies
2. POST applyuploaddfs → 获取上传参数(UploadID、分片信息)
3. POST uploadpartdfs → 分片上传
4. POST completepartuploaddfs → 完成上传
5. POST 发布/创建视频号动态
"""
import asyncio
import hashlib
import json
import os
import sys
import time
from pathlib import Path

import httpx

SCRIPT_DIR = Path(__file__).parent
COOKIE_FILE = SCRIPT_DIR / "channels_storage_state.json"
VIDEO_DIR = Path("/Users/karuo/Movies/soul视频/soul 派对 119场 20260309_output/成片")

sys.path.insert(0, str(SCRIPT_DIR.parent.parent / "多平台分发" / "脚本"))
from cookie_manager import CookieManager
from video_utils import extract_cover, extract_cover_bytes

FINDER_HOST = "https://finder-assistant.mp.video.tencent-cloud.com"
CHANNELS_HOST = "https://channels.weixin.qq.com"
CHUNK_SIZE = 3 * 1024 * 1024

UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36"
)

TITLES = {
    "早起不是为了开派对,是不吵老婆睡觉.mp4":
        "每天6点起床不是因为自律,是因为老婆还在睡 #Soul派对 #创业日记",
    "懒人的活法 动作简单有利可图正反馈.mp4":
        "懒人也能赚钱?动作简单、有利可图、正反馈 #Soul派对 #副业思维",
    "初期团队先找两个IS,比钱好使 ENFJ链接人,ENTJ指挥.mp4":
        "创业初期先找两个IS型人格,比融资好使十倍 #MBTI创业 #团队搭建",
    "ICU出来一年多 活着要在互联网上留下东西.mp4":
        "ICU出来一年多,活着就要在互联网上留下东西 #人生感悟 #创业觉醒",
    "MBTI疗愈SOUL 年轻人测MBTI,40到60岁走五行八卦.mp4":
        "20岁测MBTI,40岁该学五行八卦了 #MBTI #认知觉醒",
    "Soul业务模型 派对+切片+小程序全链路.mp4":
        "派对获客→AI切片→小程序变现,全链路拆解 #商业模式 #一人公司",
    "Soul切片30秒到8分钟 AI半小时能剪10到30个.mp4":
        "AI剪辑半小时出10到30条切片,内容工厂效率密码 #AI剪辑 #内容效率",
    "刷牙听业务逻辑 Soul切片变现怎么跑.mp4":
        "刷牙3分钟听完一套变现逻辑 #碎片创业 #副业逻辑",
    "国学易经怎么学 两小时七七八八,召唤作者对话.mp4":
        "易经两小时学个七七八八,关键是跟古人对话 #国学 #易经入门",
    "广点通能投Soul了,1000曝光6到10块.mp4":
        "广点通能投Soul了!1000曝光只要6到10块 #广点通 #低成本获客",
    "建立信任不是求来的 卖外挂发邮件三个月拿下德国总代.mp4":
        "信任不是求来的,发三个月邮件拿下德国总代理 #销售思维 #信任建立",
    "核心就两个字 筛选。能开派对坚持7天的人再谈.mp4":
        "核心就两个字:筛选。能坚持7天的人才值得深聊 #筛选思维 #创业认知",
    "睡眠不好?每天放下一件事,做减法.mp4":
        "睡不好不是太累,是脑子装太多,每天做减法 #做减法 #心理健康",
    "这套体系花了170万,但前端几十块就能参与.mp4":
        "后端花170万搭体系,前端几十块就能参与 #商业认知 #体系思维",
    "金融AI获客体系 后端30人沉淀12年,前端丢手机.mp4":
        "后端30人沉淀12年,前端就丢个手机号 #AI获客 #系统思维",
}


def _build_headers(cookies: CookieManager) -> dict:
    return {
        "Cookie": cookies.cookie_str,
        "User-Agent": UA,
        "Referer": "https://channels.weixin.qq.com/",
        "Origin": "https://channels.weixin.qq.com",
    }


async def check_login(client: httpx.AsyncClient, cookies: CookieManager) -> dict:
    """检查登录状态"""
    url = f"{CHANNELS_HOST}/cgi-bin/mmfinderassistant-bin/helper/helper_upload_params"
    resp = await client.post(url, headers=_build_headers(cookies), json={})
    try:
        data = resp.json()
        if data.get("base_resp", {}).get("ret") == 0:
            return data
    except Exception:
        pass

    url2 = f"{CHANNELS_HOST}/cgi-bin/mmfinderassistant-bin/helper/helper_search_finder"
    resp2 = await client.post(url2, headers=_build_headers(cookies), json={"query": ""})
    try:
        data2 = resp2.json()
        return data2 if data2.get("base_resp", {}).get("ret") == 0 else {}
    except Exception:
        return {}


async def apply_upload(
    client: httpx.AsyncClient, cookies: CookieManager,
    filename: str, filesize: int, filetype: str = "video"
) -> dict:
    """申请上传 DFS"""
    print("  [1] 申请上传...")
    url = f"{FINDER_HOST}/applyuploaddfs"
    body = {
        "fileName": filename,
        "fileSize": filesize,
        "fileType": filetype,
    }
    resp = await client.post(url, json=body, headers=_build_headers(cookies), timeout=30.0)
    resp.raise_for_status()
    data = resp.json()
    if data.get("ret") != 0 and data.get("code") != 0 and "UploadID" not in str(data):
        raise RuntimeError(f"applyuploaddfs 失败: {data}")
    upload_id = data.get("UploadID", data.get("uploadId", ""))
    print(f"      UploadID={upload_id[:30] if upload_id else 'N/A'}...")
    return data


async def upload_parts(
    client: httpx.AsyncClient, cookies: CookieManager,
    upload_id: str, file_path: str
) -> bool:
    """分片上传"""
    print("  [2] 分片上传...")
    raw = Path(file_path).read_bytes()
    total = len(raw)
    n_chunks = (total + CHUNK_SIZE - 1) // CHUNK_SIZE

    for i in range(n_chunks):
        start = i * CHUNK_SIZE
        end = min(start + CHUNK_SIZE, total)
        chunk = raw[start:end]

        url = f"{FINDER_HOST}/uploadpartdfs?PartNumber={i+1}&UploadID={upload_id}"
        resp = await client.post(
            url,
            content=chunk,
            headers={
                **_build_headers(cookies),
                "Content-Type": "application/octet-stream",
            },
            timeout=120.0,
        )
        if resp.status_code not in (200, 204):
            print(f"      chunk {i+1}/{n_chunks} 失败: {resp.status_code} {resp.text[:200]}")
            return False
        print(f"      chunk {i+1}/{n_chunks} ok ({len(chunk)/1024:.0f}KB)")

    return True


async def complete_upload(
    client: httpx.AsyncClient, cookies: CookieManager, upload_id: str
) -> dict:
    """完成上传"""
    print("  [3] 完成上传...")
    url = f"{FINDER_HOST}/completepartuploaddfs?UploadID={upload_id}"
    resp = await client.post(url, headers=_build_headers(cookies), json={}, timeout=30.0)
    resp.raise_for_status()
    data = resp.json()
    print(f"      完成: {json.dumps(data, ensure_ascii=False)[:200]}")
    return data


async def publish_post(
    client: httpx.AsyncClient, cookies: CookieManager,
    title: str, media_id: str = "", file_key: str = "",
    cover_url: str = "",
) -> dict:
    """发布视频号动态"""
    print("  [4] 发布动态...")
    url = f"{CHANNELS_HOST}/cgi-bin/mmfinderassistant-bin/helper/helper_video_publish"

    body = {
        "postDesc": title,
        "mediaList": [{
            "mediaType": 9,
            "mediaId": media_id,
            "fileKey": file_key,
        }],
    }
    if cover_url:
        body["coverUrl"] = cover_url

    resp = await client.post(url, json=body, headers=_build_headers(cookies), timeout=30.0)
    data = resp.json() if resp.status_code == 200 else {}
    print(f"      响应: {json.dumps(data, ensure_ascii=False)[:300]}")
    return data


async def publish_one(video_path: str, title: str, idx: int = 1, total: int = 1) -> bool:
    fname = Path(video_path).name
    fsize = Path(video_path).stat().st_size

    print(f"\n{'='*60}")
    print(f"  [{idx}/{total}] {fname}")
    print(f"  大小: {fsize/1024/1024:.1f}MB")
    print(f"  标题: {title[:60]}")
    print(f"{'='*60}")

    try:
        cookies = CookieManager(COOKIE_FILE, "weixin.qq.com")
        if not cookies.is_valid():
            print("  [✗] Cookie 已过期,请重新运行 channels_login.py")
            return False

        async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
            login_check = await check_login(client, cookies)
            if not login_check:
                print("  [✗] Cookie 无效,请重新登录")
                return False

            apply_data = await apply_upload(client, cookies, fname, fsize)
            upload_id = apply_data.get("UploadID", apply_data.get("uploadId", ""))
            if not upload_id:
                print("  [✗] 未获取到 UploadID")
                return False

            if not await upload_parts(client, cookies, upload_id, video_path):
                print("  [✗] 上传失败")
                return False

            complete_data = await complete_upload(client, cookies, upload_id)
            media_id = complete_data.get("mediaId", complete_data.get("media_id", ""))
            file_key = complete_data.get("fileKey", complete_data.get("file_key", upload_id))

            result = await publish_post(client, cookies, title, media_id, file_key)

            ret = result.get("base_resp", {}).get("ret", -1)
            if ret == 0:
                print("  [✓] 发布成功!")
                return True
            else:
                print(f"  [✗] 发布失败: ret={ret}")
                return False

    except Exception as e:
        print(f"  [✗] 异常: {e}")
        import traceback
        traceback.print_exc()
        return False


async def main():
    if not COOKIE_FILE.exists():
        print("[✗] Cookie 不存在,请先运行 channels_login.py")
        return 1

    cookies = CookieManager(COOKIE_FILE, "weixin.qq.com")
    expiry = cookies.check_expiry()
    print(f"[i] Cookie 状态: {expiry['message']}")

    videos = sorted(VIDEO_DIR.glob("*.mp4"))
    if not videos:
        print("[✗] 未找到视频")
        return 1
    print(f"[i] 共 {len(videos)} 条视频\n")

    results = []
    for i, vp in enumerate(videos):
        title = TITLES.get(vp.name, f"{vp.stem} #Soul派对 #创业日记")
        ok = await publish_one(str(vp), title, i + 1, len(videos))
        results.append((vp.name, ok))
        if i < len(videos) - 1:
            await asyncio.sleep(5)

    print(f"\n{'='*60}")
    print("  视频号发布汇总")
    print(f"{'='*60}")
    for name, ok in results:
        print(f"  [{'✓' if ok else '✗'}] {name}")
    success = sum(1 for _, ok in results if ok)
    print(f"\n  成功: {success}/{len(results)}")
    return 0 if success == len(results) else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
@@ -1,7 +1,7 @@
 # 卡若AI 技能注册表(Skill Registry)

 > **一张表查所有技能**。任何 AI 拿到这张表,就能按关键词找到对应技能的 SKILL.md 路径并执行。
-> 65 技能 | 14 成员 | 5 负责人
+> 69 技能 | 14 成员 | 5 负责人
 > 版本:5.4 | 更新:2026-03-01
 >
 > **技能配置、安装、删除、掌管人登记** → 见 **`运营中枢/工作台/01_技能控制台.md`**。
@@ -89,6 +89,7 @@
 | W09 | 小程序管理 | 水桥 | 小程序、微信小程序 | `02_卡人(水)/水桥_平台对接/小程序管理/SKILL.md` | 微信小程序发布与维护 |
 | W10 | Soul创业实验 | 水桥 | **Soul创业实验、写Soul文章、写授文章、Soul派对写文章、第9章写文章、写soul场次、soul文章规则、Soul文章上传、第9章上传、soul上传、写soul文章、运营报表、派对填表、派对纪要** | `02_卡人(水)/水桥_平台对接/Soul创业实验/SKILL.md` | 写作+上传+运营报表统一入口;第9章规范与小程序上传见本 Skill 子类 |
 | W11 | Soul派对运营报表 | 水桥 | **运营报表、派对填表、派对截图填表发群、派对纪要、智能纪要、106场、107场、本月运营数据** | `02_卡人(水)/水桥_平台对接/飞书管理/运营报表_SKILL.md` | 派对截图+TXT→飞书运营报表→智能纪要→飞书群推送,含Token自刷新与写入校验 |
+| W11a | Soul发到素材库 | 水桥 | **Soul发到素材库、成片发飞书、切片发飞书、视频分发飞书、发到素材库** | `02_卡人(水)/水桥_平台对接/飞书管理/Soul发到素材库_SKILL.md` | 成片→飞书内容看板,含附件+多平台描述,可打包基因胶囊 |
 | W12 | MCP 搜索与连接 | 水桥 | **MCP、找MCP、连接MCP、MCP搜索、发现MCP、添加MCP、需要MCP、MCP安装、MCP发现、查MCP、装MCP** | `02_卡人(水)/水桥_平台对接/MCP管理/SKILL.md` | 搜索 5000+ MCP 服务器→生成安装配置→写入 Cursor/Claude 等 |
 | W13 | Excel表格与日报 | 水桥 | **Excel写飞书、Excel导入飞书、批量写飞书表格、飞书表格导入、CSV写飞书、日报图表发飞书、表格日报** | `02_卡人(水)/水桥_平台对接/飞书管理/Excel表格与日报_SKILL.md` | 本地 Excel/CSV→飞书表格→自动日报图表→发飞书群 |
 | W14 | **卡猫复盘** | 水桥 | **卡猫复盘、婼瑄复盘、卡猫今日复盘、婼瑄今日、复盘到卡猫、发卡猫群** | `02_卡人(水)/水桥_平台对接/飞书管理/卡猫复盘/SKILL.md` | 婼瑄目录→目标=今年总目标+完成%+人/事/数具体→飞书+卡猫群 |
@@ -100,7 +101,12 @@
 |:--|:---|:---|:---|:---|:---|
 | M01 | 视频切片 | 木叶 | **视频剪辑、切片发布、切片动效包装、程序化包装、片头片尾、批量封面、视频包装** | `03_卡木(木)/木叶_视频内容/视频切片/SKILL.md` | 长视频切片+字幕+发布;联动切片动效包装(片头/片尾/程序化) |
 | M01b | 抖音视频解析 | 木叶 | **抖音视频、抖音链接、抖音解析、抖音下载、提取抖音文案、抖音无水印** | `03_卡木(木)/木叶_视频内容/抖音视频解析/SKILL.md` | 链接→解析ID→提取文案→下载无水印视频 |
-| M01c | 抖音发布 | 木叶 | **抖音发布、发布到抖音、抖音登录、抖音上传、腕推抖音** | `03_卡木(木)/木叶_视频内容/抖音发布/SKILL.md` | 开放平台 OAuth 登录 + 上传/创建视频发布;可对接腕推/存客宝 |
+| M01c | 抖音发布 | 木叶 | **抖音发布、发布到抖音、抖音登录、抖音上传、腕推抖音** | `03_卡木(木)/木叶_视频内容/抖音发布/SKILL.md` | 纯 API 视频上传+发布(VOD + bd-ticket-guard),无需浏览器 |
+| M01d | B站发布 | 木叶 | **B站发布、发布到B站、B站登录、B站上传、bilibili发布** | `03_卡木(木)/木叶_视频内容/B站发布/SKILL.md` | 纯 API(preupload 分片),Cookie 有效期约6个月 |
+| M01e | 视频号发布 | 木叶 | **视频号发布、发布到视频号、视频号登录、视频号上传、微信视频号** | `03_卡木(木)/木叶_视频内容/视频号发布/SKILL.md` | 纯 API(finder-assistant 腾讯云上传),微信扫码登录 |
+| M01f | 小红书发布 | 木叶 | **小红书发布、发布到小红书、小红书登录、小红书上传、RED发布** | `03_卡木(木)/木叶_视频内容/小红书发布/SKILL.md` | 逆向 creator API 视频笔记发布,封面取第一帧 |
+| M01g | 快手发布 | 木叶 | **快手发布、发布到快手、快手登录、快手上传、kuaishou发布** | `03_卡木(木)/木叶_视频内容/快手发布/SKILL.md` | 逆向 cp.kuaishou.com API 视频发布 |
+| M01h | 多平台分发 | 木叶 | **多平台分发、一键分发、全平台发布、批量分发、视频分发** | `03_卡木(木)/木叶_视频内容/多平台分发/SKILL.md` | 一键分发到5平台(抖音/B站/视频号/小红书/快手),Cookie统一管理 |
 | M02 | 网站逆向分析 | 木根 | 逆向分析、模拟登录 | `03_卡木(木)/木根_逆向分析/网站逆向分析/SKILL.md` | 网站 API 分析、SDK 生成 |
 | M03 | 项目生成 | 木果 | 生成项目、五行模板 | `03_卡木(木)/木果_项目模板/项目生成/SKILL.md` | 按五行模板生成新项目 |
 | M04 | 开发模板 | 木果 | 创建项目、初始化模板 | `03_卡木(木)/木果_项目模板/开发模板/SKILL.md` | 前后端项目模板库 |
@@ -164,7 +170,7 @@
 |:--|:---|:--|:--|
 | 金 | 卡资 | 2 | 21 |
 | 水 | 卡人 | 3 | 13 |
-| 木 | 卡木 | 3 | 8 |
+| 木 | 卡木 | 3 | 13 |
 | 火 | 卡火 | 4 | 15 |
 | 土 | 卡土 | 4 | 7 |
-| **合计** | **5** | **14** | **64** |
+| **合计** | **5** | **14** | **69** |
@@ -256,3 +256,4 @@
 | 2026-03-08 10:53:09 | 🔄 卡若AI 同步 2026-03-08 10:53 | 更新:卡土、总索引与入口、运营中枢工作台 | 排除 >20MB: 11 个 |
 | 2026-03-09 05:51:31 | 🔄 卡若AI 同步 2026-03-09 05:51 | 更新:金仓、水桥平台对接、卡木、运营中枢工作台 | 排除 >20MB: 11 个 |
 | 2026-03-09 22:16:33 | 🔄 卡若AI 同步 2026-03-09 22:16 | 更新:水桥平台对接、水溪整理归档、卡木、运营中枢工作台 | 排除 >20MB: 11 个 |
+| 2026-03-09 22:23:01 | 🔄 卡若AI 同步 2026-03-09 22:22 | 更新:卡木、运营中枢工作台 | 排除 >20MB: 11 个 |

@@ -259,3 +259,4 @@
 | 2026-03-08 10:53:09 | 成功 | 成功 | 🔄 卡若AI 同步 2026-03-08 10:53 | 更新:卡土、总索引与入口、运营中枢工作台 | 排除 >20MB: 11 个 | [仓库](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [百科](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |
 | 2026-03-09 05:51:31 | 成功 | 成功 | 🔄 卡若AI 同步 2026-03-09 05:51 | 更新:金仓、水桥平台对接、卡木、运营中枢工作台 | 排除 >20MB: 11 个 | [仓库](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [百科](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |
 | 2026-03-09 22:16:33 | 成功 | 成功 | 🔄 卡若AI 同步 2026-03-09 22:16 | 更新:水桥平台对接、水溪整理归档、卡木、运营中枢工作台 | 排除 >20MB: 11 个 | [仓库](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [百科](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |
+| 2026-03-09 22:23:01 | 成功 | 成功 | 🔄 卡若AI 同步 2026-03-09 22:22 | 更新:卡木、运营中枢工作台 | 排除 >20MB: 11 个 | [仓库](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [百科](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |