A WYSIWYG word processor in Python
Hi all,

Finding a good data structure for a word processor is a difficult problem. My notebook diaries on the problem go back 25 years, to when I was frustrated using Word for my diploma thesis: it was slow and unstable at the time. I ended up getting pretty hooked on the problem. Right now I'm taking a professional break and decided to finally use the time to push these ideas further and build MiniWord, a WYSIWYG word processor in Python. My goal is a native, non-HTML-based editor that stays simple, fast, and hackable.

So far I am focusing on getting the fundamentals right. What is working so far:

- Real WYSIWYG editing (no HTML layer, no embedded browser) with styles, images, and tables
- A clean, simple file format (human-readable, diff-friendly, git-friendly, AI-friendly)
- Markdown support
- Support for Python plugins

Things I found along the way:

- B-tree structures are perfect for holding rich text data
- A simple text-based file format is incredibly useful: you can diff documents, version them, and even process them with AI tools quite naturally

What I'd love feedback on:

- Where do you see real use cases for something like this?
- What would be missing for you to take it seriously as a tool or platform?
- What kinds of plugins or extensions would actually be worth building?

Happy about any thoughts, positive or critical. Greetings
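MiniWord's B-tree internals aren't shown in the post, but the core rich-text idea (text plus style spans) can be sketched without the balancing machinery. This is a minimal illustration only; the `Run` type and `apply_style` helper are invented here, not MiniWord's actual API:

```python
# Hypothetical sketch: rich text as a flat list of styled runs.
# A B-tree (as MiniWord reportedly uses) would hold these runs in
# balanced nodes for O(log n) edits; a flat list shows the same idea.
from dataclasses import dataclass

@dataclass
class Run:
    text: str
    styles: frozenset = frozenset()  # e.g. {"bold", "italic"}

def apply_style(runs, start, end, style):
    """Return new runs with `style` added over character range [start, end)."""
    out, pos = [], 0
    for run in runs:
        run_end = pos + len(run.text)
        a, b = max(start, pos), min(end, run_end)
        if b <= a:                     # run lies outside the styled range
            out.append(run)
        else:                          # split the run at the range borders
            if a > pos:
                out.append(Run(run.text[:a - pos], run.styles))
            out.append(Run(run.text[a - pos:b - pos], run.styles | {style}))
            if b < run_end:
                out.append(Run(run.text[b - pos:], run.styles))
        pos = run_end
    return out

doc = [Run("Hello, world")]
doc = apply_style(doc, 0, 5, "bold")
print([(r.text, sorted(r.styles)) for r in doc])
# -> [('Hello', ['bold']), (', world', [])]
```

Splitting runs at style boundaries like this is what keeps the on-disk format diff-friendly: each run serializes to one readable span.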
Similar products
Angel Match
A database of 110,000+ business angels and VC investors. Save time on investor search: find the right ones by industry, stage, and location.
Calendesk
Online booking software. Stop wasting time coordinating meetings: automate scheduling, payments, and client management. For therapists, coaches, lawyers, and service businesses.
Changelogfy
Make better decisions and build products based on feedback. A single platform for collecting feedback, prioritizing the roadmap, and publishing updates.
GitByBit
GitByBit is an interactive course that teaches you Git through practice, right in your code editor. You follow bite-sized instructions, run real Git commands in the terminal or click through your editor's Git interface, and the course verifies what happened. When something breaks, it tells you why and how to get unstuck. It's well designed and illustrated.
Relvy
Hey HN! We are Bharath and Simranjit from Relvy AI (https://www.relvy.ai). Relvy automates on-call runbooks for software engineering teams. It is an AI agent equipped with tools that can analyze telemetry data and code at scale, helping teams debug and resolve production issues in minutes. Here's a video: https://www.youtube.com/watch?v=BXr4_XlWXc0

A lot of teams are using AI in some form to reduce their on-call burden. You may be pasting logs into Cursor, or using Claude Code with Datadog's MCP server to help debug. What we've seen is that autonomous root cause analysis is a hard problem for AI. This shows up in benchmarks: in contrast to its performance on coding tasks, Claude Opus 4.6 is currently at 36% accuracy on the OpenRCA dataset. There are three main reasons for this:

(1) Telemetry data volume can drown the model in noise.
(2) Data interpretation and reasoning depend on enterprise context.
(3) On-call is a time-constrained, high-stakes problem, with little room for the AI to explore during an investigation. Errors that send the user down the wrong path are not easily forgiven.

At Relvy, we are tackling these problems by building specialized tools for telemetry data analysis. Our tools can detect anomalies and identify problem slices in dense time-series data, do log pattern search, and reason about span trees, all without overwhelming the agent context. Anchoring the agent around runbooks leads to less agentic exploration and more deterministic steps, reflecting the most useful steps an experienced engineer would take. That means faster analysis and less cognitive load on engineers reviewing and understanding what the AI did.

How it works: install Relvy on a local machine via docker-compose (or via Helm charts, or sign up on our cloud), connect your stack (observability and code), create your first runbook, and have Relvy investigate a recent alert.
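The "log pattern search" tool mentioned above can be approximated in a few lines. The masking rules here are a guess at the general template-mining technique, not Relvy's actual implementation:

```python
# Hypothetical sketch: collapse raw log lines into templates by masking
# variable tokens, so an agent sees a handful of patterns instead of
# thousands of lines. Relvy's real tooling is not public.
import re
from collections import Counter

def template(line):
    line = re.sub(r"0x[0-9a-f]+", "<hex>", line)  # mask hex IDs
    line = re.sub(r"\d+", "<num>", line)          # mask numbers
    return line

logs = [
    "GET /api/v1/users/1042 504 in 3021ms",
    "GET /api/v1/users/7 504 in 2980ms",
    "GET /api/v1/users/99 200 in 12ms",
]
patterns = Counter(template(line) for line in logs)
for pat, count in patterns.most_common():
    print(count, pat)
# -> 3 GET /api/v<num>/users/<num> <num> in <num>ms
```

A production version would preserve semantically meaningful numbers (status codes, error classes) rather than masking everything, but even this crude pass shows how the agent's context stays small.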
Each investigation is presented as a notebook in our web UI, with data visualizations that help engineers verify the AI's work and build trust in it. From there, Relvy can be configured to respond automatically to alerts from Slack.

Some example runbook steps that Relvy automates:

- Check so-and-so dashboard; see if the errors are isolated to a specific shard.
- Check whether there's a throughput surge on the APM page, and if so, whether it comes from a few IPs.
- Check recent commits to see if anything changed for this endpoint.

You can also configure AWS CLI commands that Relvy can run to automate mitigation actions, with human approval.

A little bit about us: we did YC back in fall 2024. We started our journey experimenting with continuous log monitoring with small language models; that was too slow. We then invested deeply in solving root cause analysis effectively, and our product today is the result of about a year of work with our early customers.

Give us a try today. Happy to hear feedback, or about how you are tackling on-call burden at your company. Appreciate any comments or suggestions!
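The deterministic runbook steps described above can be pictured as structured data an agent walks in order. All step names and fields below are invented for illustration; Relvy's real runbook format is not public:

```python
# Hypothetical sketch: a runbook as an ordered list of steps, each with
# a target to inspect and a question to answer. A fixed walk like this
# is what makes the investigation deterministic and reviewable.
runbook = [
    {"step": "check_dashboard", "target": "checkout-errors",
     "question": "Are errors isolated to a specific shard?"},
    {"step": "check_apm", "target": "throughput",
     "question": "Is there a surge, and is it from a few IPs?"},
    {"step": "check_commits", "target": "payments-service",
     "question": "Did anything change for this endpoint recently?"},
]

def run(runbook, execute):
    """Walk every step in order, collecting one finding per step."""
    return [{"step": s["step"], "finding": execute(s)} for s in runbook]

# Stub executor standing in for real telemetry/code-analysis tools.
report = run(runbook, lambda s: f"checked {s['target']}")
for entry in report:
    print(entry["step"], "->", entry["finding"])
```

In the real product each `execute` call would invoke a telemetry tool and the findings would land in the notebook UI; the point of the sketch is only that the step order is fixed, not chosen by the agent.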