🔄 卡若AI sync 2026-03-12 23:20 | Updated: Cursor rules, 水桥 platform integration, 卡木, master index & entry, Ops Hub workbench | Excluded >20MB: 11 files
@@ -67,6 +67,12 @@ alwaysApply: true

### Review (mandatory for every conversation; 卡若AI's uniform reply format)

- **Every conversation inside 卡若AI: the AI replies in review form, always.** There is **only ever one** review format: the full review. **No tables inside review blocks**; each block is marked with **a small icon** (🎯📌💡📝▶); the heading must carry a **specific date and time** (real YYYY-MM-DD HH:mm); **add color where possible** (date/time and block headings). Details: `运营中枢/参考资料/卡若复盘格式_固定规则.md`.
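As an illustration of the heading rule only, a minimal Python sketch (the `build_review_heading` helper is hypothetical, not part of the rule file) that stamps a review heading with a real, concrete date and time:

```python
from datetime import datetime
from typing import Optional

# Hypothetical helper (assumption, not from the rule file): builds the
# mandatory review heading. The rule fixes only the text shape
# (real YYYY-MM-DD HH:mm); coloring happens in the editor, not here.
def build_review_heading(now: Optional[datetime] = None) -> str:
    now = now or datetime.now()
    return f"🎯 卡若复盘 {now.strftime('%Y-%m-%d %H:%M')}"

print(build_review_heading(datetime(2026, 3, 12, 23, 20)))
# → 🎯 卡若复盘 2026-03-12 23:20
```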

### Feishu review summary to group (mandatory after every conversation)

- **After each conversation completes and the review is written**, post a **concise review summary** (suggested ≤500 chars: date/time; goal · result · completion rate; key points of finished tasks; next steps) to the Feishu group.

- **Long conversations must always be posted**; post for every completed task.

- **Execution**: `python3 "02_卡人(水)/水桥_平台对接/飞书管理/脚本/send_review_to_feishu_webhook.py" "【卡若AI复盘】YYYY-MM-DD HH:mm\n🎯 …\n📌 …\n▶ …"` (condense the review into a single text argument).

- Reference: `运营中枢/工作台/飞书复盘总结发群说明.md`.
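The script's internals are not shown in this diff; as a hedged sketch under assumptions, a Feishu custom-bot webhook of this kind accepts the standard text-message JSON body below (`WEBHOOK_URL` is a placeholder; the payload shape is the documented Feishu custom-bot format, assumed rather than read from the script):

```python
import json
import urllib.request

WEBHOOK_URL = "https://open.feishu.cn/open-apis/bot/v2/hook/<token>"  # placeholder

def build_text_payload(summary: str) -> dict:
    # Standard Feishu custom-bot text message body.
    return {"msg_type": "text", "content": {"text": summary}}

def send_review(summary: str) -> None:
    # Fire-and-forget POST; a real sender should check the JSON response code.
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(build_text_payload(summary)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

payload = build_text_payload("【卡若AI复盘】2026-03-12 23:20\n🎯 …")
```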

### Terminal commands & routine operations (mandatory: anything needing a terminal is executed directly)

- **Anything that must run in a terminal is executed by the AI directly; never hand the command to the user to run.** This includes, but is not limited to: download/sync scripts (e.g. pulling the latest from GitHub), deploy scripts, builds, syncing Gitea, and running scripts inside SKILLs. Do not output "please run this in your terminal" / "please run the following command" for the user to copy and execute.

- **Terminal commands**: execute without asking; explain in ≤50 characters, then run.
60 02_卡人(水)/水桥_平台对接/飞书管理/复盘总结发飞书群_SKILL.md Normal file
@@ -0,0 +1,60 @@
---
name: 复盘总结发飞书群
description: After each conversation completes, post a concise review summary to the designated Feishu group (webhook); long conversations must be posted, and every completed task is posted. One of 卡若AI's mandatory rules.
triggers: 复盘发飞书、飞书复盘、对话总结发群、复盘总结发群、FEISHU_REVIEW_WEBHOOK
owner: 水桥
group: 水
version: "1.0"
updated: "2026-03-12"
---

# 复盘总结发飞书群(水桥)

> After the conversation ends and the review is written, post a concise review summary to the Feishu group. **Post for every conversation; long conversations must be posted.**

---

## What it does

- Pushes a **concise review summary** (≤500 chars) to the designated group via a Feishu bot v2 webhook.
- Suggested content: date/time; goal · result · completion rate; key points of finished tasks; next steps (or "none").

---

## Rules (卡若AI mandatory)

- **When**: after each conversation completes, once the 「卡若复盘」 review is done.
- **Frequency**: post after every completed conversation; **long conversations must always be posted**.
- **Content**: a condensed review (not all five blocks are required; keep: time, goal · result · completion rate, completed key points, next steps).

---

## Script & usage

| Item | Description |
|:---|:---|
| **Script** | `02_卡人(水)/水桥_平台对接/飞书管理/脚本/send_review_to_feishu_webhook.py` |
| **Default webhook** | Configured inside the script; override with the `FEISHU_REVIEW_WEBHOOK` environment variable. |

```bash
# Pass the concise summary directly
python3 "02_卡人(水)/水桥_平台对接/飞书管理/脚本/send_review_to_feishu_webhook.py" "【卡若AI复盘】2026-03-12 15:30
🎯 完成记忆系统使用手册与流程图
📌 已写开发文档/9、手册/卡若AI记忆系统使用手册.md;流程图已存 9、手册/images/
▶ 无"

# Read the summary from a file
python3 .../send_review_to_feishu_webhook.py --file /path/to/summary.txt
```

---

## Relation to other rules

- **Cursor rules**: the "Feishu review summary to group" section in `.cursor/rules/karuo-ai.mdc` — send after the conversation ends.
- **Conversation capture & optimization rules**: `运营中枢/使用手册/对话沉淀与优化规则.md`, section 2 → 2.1 "Feishu review summary to group" (required item).
- **Workbench note**: `运营中枢/工作台/飞书复盘总结发群说明.md`.

---

*Whenever this Skill is updated: if the webhook, script path, or rules change, update this file and the three references above in sync.*
@@ -381,6 +381,51 @@ def _col_letter(n: int) -> str:
    return s


def _cell_px_width(text: str) -> int:
    """Estimate the rendered width of a cell's content in px: CJK ≈ 20 px/char, ASCII ≈ 9 px/char, plus 24 px padding."""
    w = 0
    for c in text:
        cp = ord(c)
        if (0x4E00 <= cp <= 0x9FFF or 0x3000 <= cp <= 0x303F or
                0xFF01 <= cp <= 0xFF60 or 0xFE30 <= cp <= 0xFE4F):
            w += 20
        else:
            w += 9
    return w + 24


def _auto_resize_sheet_columns(
    headers: dict, spreadsheet_token: str, sheet_id: str, values: list[list[str]]
) -> None:
    """Set each column's width from its content width (via the Feishu Sheets dimension_range API)."""
    if not values or not spreadsheet_token or not sheet_id:
        return
    cols = max(len(r) for r in values) if values else 0
    url = (
        f"https://open.feishu.cn/open-apis/sheets/v2/spreadsheets"
        f"/{spreadsheet_token}/dimension_range"
    )
    for j in range(cols):
        max_w = max(
            (_cell_px_width(row[j]) for row in values if j < len(row)),
            default=80,
        )
        width = max(80, min(max_w, 400))
        payload = {
            "dimension": {
                "sheetId": sheet_id,
                "majorDimension": "COLUMNS",
                "startIndex": j,
                "endIndex": j + 1,
            },
            "dimensionProperties": {"pixelSize": width},
        }
        try:
            requests.put(url, headers=headers, json=payload, timeout=10)
        except Exception:
            pass


def _fill_sheet_block_values(headers: dict, sheet_block_token: str, values: list[list[str]]) -> bool:
    if not sheet_block_token or "_" not in sheet_block_token or not values:
        return False
@@ -402,6 +447,8 @@ def _fill_sheet_block_values(headers: dict, sheet_block_token: str, values: list
    if j.get("code") != 0:
        print(f"⚠️ Sheet data write failed: {j.get('msg')} range={range_str}")
        return False
    # After writing data, auto-fit column widths
    _auto_resize_sheet_columns(headers, spreadsheet_token, sheet_id, values)
    return True
@@ -117,7 +117,7 @@ def _is_md_table_sep(line: str) -> bool:
     if s.endswith("|"):
         s = s[:-1]
     parts = [p.strip() for p in s.split("|")]
-    return bool(parts) and all(re.match(r"^:?-{3,}:?$", p or "") for p in parts)
+    return bool(parts) and all(re.match(r"^:?-{2,}:?$", p or "") for p in parts)


 def _clean_inline_markdown(text: str) -> str:
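This change loosens the separator-cell pattern from at least three dashes to at least two. A quick standalone check of both patterns against sample cells:

```python
import re

OLD = r"^:?-{3,}:?$"  # pattern before the change
NEW = r"^:?-{2,}:?$"  # pattern after the change

for cell in ["--", "---", ":--:", "-"]:
    print(cell, bool(re.match(OLD, cell)), bool(re.match(NEW, cell)))
# "--"  : old rejects, new accepts
# "---" : both accept
# ":--:": old rejects (only two dashes), new accepts
# "-"   : both reject
```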
@@ -0,0 +1,255 @@
"""
WebPomodoro backend — macOS-native control via AppleScript + SQLite data access.
"""
import subprocess
import sqlite3
import os
import json
from pathlib import Path
from typing import Optional


# ── Paths ──────────────────────────────────────────────────────────────────

CONTAINER = Path.home() / "Library/Containers/com.macpomodoro/Data"
WEBKIT_BASE = CONTAINER / "Library/WebKit/WebsiteData/Default"
PREFS_PLIST = CONTAINER / "Library/Preferences/com.macpomodoro.plist"
def _find_webkit_dir() -> Optional[Path]:
    """Find the hashed WebKit storage directory."""
    if not WEBKIT_BASE.exists():
        return None
    dirs = list(WEBKIT_BASE.iterdir())
    if dirs:
        return dirs[0] / dirs[0].name
    return None


def _localstorage_db() -> Optional[Path]:
    d = _find_webkit_dir()
    if d:
        p = d / "LocalStorage/localstorage.sqlite3"
        if p.exists():
            return p
    return None


def _indexeddb() -> Optional[Path]:
    d = _find_webkit_dir()
    if d:
        idb_dir = d / "IndexedDB"
        if idb_dir.exists():
            dbs = list(idb_dir.iterdir())
            if dbs:
                return dbs[0] / "IndexedDB.sqlite3"
    return None
# ── App control ────────────────────────────────────────────────────────────

def _run_applescript(script: str) -> str:
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, timeout=10
    )
    if result.returncode != 0:
        raise RuntimeError(f"AppleScript error: {result.stderr.strip()}")
    return result.stdout.strip()


def is_running() -> bool:
    """Check if WebPomodoro is running."""
    script = 'tell application "System Events" to return (name of processes) contains "WebPomodoro"'
    return _run_applescript(script) == "true"


def launch() -> None:
    """Launch WebPomodoro if not running."""
    subprocess.run(["open", "-a", "WebPomodoro"], check=True)


def get_timer_label() -> str:
    """Read current timer display from status bar (e.g. '24:30')."""
    script = '''
    tell application "System Events"
        tell process "WebPomodoro"
            return name of menu bar item 1 of menu bar 2
        end tell
    end tell'''
    try:
        return _run_applescript(script)
    except Exception:
        return "unknown"


def click_status_bar() -> None:
    """Click the status bar item to open the timer menu."""
    script = '''
    tell application "System Events"
        tell process "WebPomodoro"
            click menu bar item 1 of menu bar 2
        end tell
    end tell'''
    _run_applescript(script)


def activate_app() -> None:
    _run_applescript('tell application "WebPomodoro" to activate')
# ── LocalStorage reader ────────────────────────────────────────────────────

def _decode_utf16(b) -> str:
    if isinstance(b, bytes):
        return b.decode("utf-16-le", errors="replace")
    return str(b)


def read_localstorage() -> dict:
    """Read all LocalStorage key-value pairs."""
    db_path = _localstorage_db()
    if not db_path:
        return {}
    conn = sqlite3.connect(str(db_path))
    c = conn.cursor()
    c.execute("SELECT key, value FROM ItemTable")
    result = {}
    for key, val in c.fetchall():
        try:
            k = _decode_utf16(key)
            v = _decode_utf16(val)
            if not v.startswith("data:image"):  # skip base64 images
                result[k] = v
        except Exception:
            pass
    conn.close()
    return result
def get_timer_state() -> dict:
    """
    Return timer state dict:
    label, timingTaskId, timingSubtaskId, goals, syncTimestamp
    """
    ls = read_localstorage()
    label = get_timer_label()
    return {
        "label": label,
        "timingTaskId": ls.get("timingTaskId", ""),
        "timingSubtaskId": ls.get("timingSubtaskId", ""),
        "goals": _safe_json(ls.get("Goals", "[]")),
        "version": ls.get("Version", ""),
        "user": _safe_b64(ls.get("cookie.NAME", "")),
        "email": _safe_b64(ls.get("cookie.ACCT", "")),
        "syncTimestamp": ls.get("SyncTimestamp", ""),
    }


def _safe_json(s: str):
    try:
        return json.loads(s)
    except Exception:
        return s


def _safe_b64(s: str) -> str:
    import base64
    try:
        return base64.b64decode(s).decode("utf-8")
    except Exception:
        return s
# ── IndexedDB reader (binary WebKit IDB format) ────────────────────────────

def _decode_idb_value(raw: bytes) -> Optional[dict]:
    """
    WebKit IDB values are serialized in a custom binary format.
    We extract printable strings as a best-effort approach.
    """
    if not raw:
        return None
    # Try to extract UTF-8 readable substrings (field names + values)
    result = {}
    try:
        text = raw.decode("utf-8", errors="replace")
        # Extract key-value pairs by scanning for common field patterns
        import re
        # JSON-like fragments embedded in binary
        json_frags = re.findall(r'\{[^{}]{5,500}\}', text)
        for frag in json_frags:
            try:
                obj = json.loads(frag)
                result.update(obj)
                break
            except Exception:
                pass
        # Extract readable strings
        words = re.findall(r'[A-Za-z0-9\u4e00-\u9fff\-_@.]{3,}', text)
        if not result and words:
            result["_raw_words"] = words[:20]
    except Exception:
        pass
    return result
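On synthetic input the best-effort extraction behaves like this (a trimmed standalone restatement of `_decode_idb_value`; the sample bytes are fabricated for illustration, not a real WebKit record):

```python
import json
import re

def decode_idb_value(raw: bytes) -> dict:
    # Trimmed copy of the backend's best-effort decoder: pull the first
    # parseable JSON fragment out of the binary blob; failing that, fall
    # back to a list of readable words.
    result = {}
    text = raw.decode("utf-8", errors="replace")
    for frag in re.findall(r'\{[^{}]{5,500}\}', text):
        try:
            result.update(json.loads(frag))
            break
        except Exception:
            pass
    if not result:
        words = re.findall(r'[A-Za-z0-9\u4e00-\u9fff\-_@.]{3,}', text)
        result["_raw_words"] = words[:20]
    return result

blob = b'\x00\x07\x01{"task": "write docs", "minutes": 25}\x00\x02'
print(decode_idb_value(blob))
# → {'task': 'write docs', 'minutes': 25}
```

When no JSON fragment parses, only the `_raw_words` fallback survives, which is why callers treat the result as approximate.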
def read_pomodoro_records(limit: int = 20) -> list:
    """Read recent Pomodoro records from IndexedDB."""
    db_path = _indexeddb()
    if not db_path:
        return []
    conn = sqlite3.connect(str(db_path))
    c = conn.cursor()
    try:
        # Pomodoro store is id=124
        c.execute("SELECT key, value FROM Records WHERE objectStoreID=124 ORDER BY rowid DESC LIMIT ?", (limit,))
        rows = c.fetchall()
        results = []
        for key, val in rows:
            key_str = _decode_utf16(key) if isinstance(key, bytes) else str(key)
            val_decoded = _decode_idb_value(val) if isinstance(val, bytes) else {}
            results.append({"id": key_str.strip("\x00"), "data": val_decoded})
        return results
    except Exception as e:
        return [{"error": str(e)}]
    finally:
        conn.close()


def read_tasks(limit: int = 20) -> list:
    """Read recent tasks from IndexedDB."""
    db_path = _indexeddb()
    if not db_path:
        return []
    conn = sqlite3.connect(str(db_path))
    c = conn.cursor()
    try:
        # Task store is id=122
        c.execute("SELECT key, value FROM Records WHERE objectStoreID=122 ORDER BY rowid DESC LIMIT ?", (limit,))
        rows = c.fetchall()
        results = []
        for key, val in rows:
            key_str = _decode_utf16(key) if isinstance(key, bytes) else str(key)
            val_decoded = _decode_idb_value(val) if isinstance(val, bytes) else {}
            results.append({"id": key_str.strip("\x00"), "data": val_decoded})
        return results
    except Exception as e:
        return [{"error": str(e)}]
    finally:
        conn.close()


def count_today_pomodoros() -> int:
    """Count number of Pomodoro records in IndexedDB (approximate today's count)."""
    db_path = _indexeddb()
    if not db_path:
        return 0
    conn = sqlite3.connect(str(db_path))
    c = conn.cursor()
    try:
        c.execute("SELECT COUNT(*) FROM Records WHERE objectStoreID=124")
        return c.fetchone()[0]
    except Exception:
        return 0
    finally:
        conn.close()
@@ -1,7 +1,7 @@
 # 卡若AI 技能注册表(Skill Registry)

 > **One table for all skills.** Any AI given this table can find the matching skill's SKILL.md path by keyword and execute it.
-> 69 skills | 14 members | 5 owners
+> 70 skills | 14 members | 5 owners
 > Version: 5.4 | Updated: 2026-03-01
 >
 > **Skill configuration, install, removal, owner registration** → see **`运营中枢/工作台/01_技能控制台.md`**.
@@ -96,6 +96,7 @@
 | W14 | **卡猫复盘** | 水桥 | **卡猫复盘、婼瑄复盘、卡猫今日复盘、婼瑄今日、复盘到卡猫、发卡猫群** | `02_卡人(水)/水桥_平台对接/飞书管理/卡猫复盘/SKILL.md` | 婼瑄 directory → goal = this year's overall goal + completion % + specifics (people/events/numbers) → Feishu + 卡猫 group |
 | W15 | **接收短信** | 水桥 | **接收短信、收短信、receivesms、接码、临时号码、获取短信、拿短信、等刷新拿短信** | `02_卡人(水)/水桥_平台对接/接收短信/SKILL.md` | receivesms.co UK temp number → CLI grabs that number's latest SMS (--wait polls for refresh); outputs number + SMS, with notes on which site's SMS type to fetch |
 | W16 | **飞书JSON格式** | 水桥 | **飞书json、飞书json格式、飞书block、飞书块格式、飞书文档格式、json上传飞书、飞书格式怎么写、block_type、飞书块类型、飞书callout、飞书高亮块、飞书代码块** | `02_卡人(水)/水桥_平台对接/飞书管理/飞书JSON格式_SKILL.md` | Feishu doc JSON quick reference / authoring / upload; full block_type coverage, Markdown conversion mapping, one-stop API reference |
+| W17 | **复盘总结发飞书群** | 水桥 | **复盘发飞书、飞书复盘、对话总结发群、复盘总结发群** | `02_卡人(水)/水桥_平台对接/飞书管理/复盘总结发飞书群_SKILL.md` | After each conversation, post a concise review summary to the Feishu group (webhook); long conversations must be posted; mandatory rule |

 ## 木组 · 卡木 (product content creation)
@@ -314,3 +314,4 @@
 | 2026-03-12 22:05:51 | 🔄 卡若AI sync 2026-03-12 22:05 | Updated: 水桥 platform integration, Ops Hub workbench | Excluded >20MB: 11 files |
 | 2026-03-12 22:33:45 | 🔄 卡若AI sync 2026-03-12 22:33 | Updated: 水桥 platform integration, master index & entry, Ops Hub workbench | Excluded >20MB: 11 files |
 | 2026-03-12 23:10:30 | 🔄 卡若AI sync 2026-03-12 23:10 | Updated: 水桥 platform integration, 卡木, master index & entry, Ops Hub workbench | Excluded >20MB: 11 files |
+| 2026-03-12 23:12:15 | 🔄 卡若AI sync 2026-03-12 23:12 | Updated: Ops Hub, Ops Hub workbench | Excluded >20MB: 11 files |
@@ -317,3 +317,4 @@
 | 2026-03-12 22:05:51 | Success | Success | 🔄 卡若AI sync 2026-03-12 22:05 | Updated: 水桥 platform integration, Ops Hub workbench | Excluded >20MB: 11 files | [Repo](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [Wiki](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |
 | 2026-03-12 22:33:45 | Success | Success | 🔄 卡若AI sync 2026-03-12 22:33 | Updated: 水桥 platform integration, master index & entry, Ops Hub workbench | Excluded >20MB: 11 files | [Repo](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [Wiki](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |
 | 2026-03-12 23:10:30 | Success | Success | 🔄 卡若AI sync 2026-03-12 23:10 | Updated: 水桥 platform integration, 卡木, master index & entry, Ops Hub workbench | Excluded >20MB: 11 files | [Repo](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [Wiki](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |
+| 2026-03-12 23:12:15 | Success | Success | 🔄 卡若AI sync 2026-03-12 23:12 | Updated: Ops Hub, Ops Hub workbench | Excluded >20MB: 11 files | [Repo](http://open.quwanzhi.com:3000/fnvtk/karuo-ai) [Wiki](http://open.quwanzhi.com:3000/fnvtk/karuo-ai/wiki) |