Large-Scale Article Extraction of Newspapers, 1730s-1960s
Hello HN, over the past 7 months I've spent nearly 3,000 hours building SNEWPAPERS, the first historical newspaper archive with full-text extractions, nearly perfect OCR, a vast categorization taxonomy, and of course semantic and agentic search capabilities.

Problem: I wanted to search through newspaper archives, but every service I tried only lets you search by keywords and dates, and gives you back raw images of the papers, too many of them and with no context. A sea of noise.

Solution: I taught machines how to read the newspapers. So far I've extracted the content from more than 600k pages (about 5 TB) from the Chronicling America collection. The problems I had to deal with were an infinite variety of layouts, font sizes, scan qualities, resolutions, and aspect ratios, plus navigating around the images on each page. I also had to figure out how to get the OCR nearly perfect so people wouldn't hate reading the extracts. I stitched together a multi-model pipeline (layout detection, OCR, LLM, vLLM) with heuristics to go from layout -> segmentation -> classification. Everything lands in OpenSearch / Postgres and is semantically searchable, and an agentic search tool on top knows the API well and helps you write queries to find what you're looking for. Happy to discuss AWS architecture and scaling as well; that was tough!
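A minimal sketch of the layout -> segmentation -> classification flow described above. Every function body here is a stand-in stub, and the region kinds and category names are invented for illustration; the actual models and taxonomy aren't described in the post.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A rectangular block detected on a scanned page."""
    x: int
    y: int
    w: int
    h: int
    kind: str  # e.g. "headline", "text", "image" (illustrative labels)

def detect_layout(page_image: bytes) -> list[Region]:
    # Stand-in for a real layout-detection model; a real pipeline would
    # return one Region per column/article block on the scan.
    return [Region(0, 0, 800, 200, "headline"),
            Region(0, 200, 400, 2000, "text")]

def ocr_region(page_image: bytes, region: Region) -> str:
    # Stand-in for the OCR step (plus LLM cleanup for near-perfect text).
    return "RAILWAY OPENS TO GREAT FANFARE" if region.kind == "headline" else "..."

def classify(text: str) -> str:
    # Stand-in for assigning a category from the taxonomy.
    return "transportation" if "RAILWAY" in text else "uncategorized"

def extract_articles(page_image: bytes) -> list[dict]:
    """layout -> segmentation -> OCR -> classification, one record per block."""
    records = []
    for region in detect_layout(page_image):
        if region.kind == "image":
            continue  # image blocks would be indexed separately, not OCR'd
        text = ocr_region(page_image, region)
        records.append({"text": text,
                        "category": classify(text),
                        "bbox": (region.x, region.y, region.w, region.h)})
    return records

if __name__ == "__main__":
    for rec in extract_articles(b"fake-scan"):
        print(rec["category"], "->", rec["text"][:40])
```

In a real version each record would then be written to Postgres and embedded into OpenSearch for semantic retrieval; the record shape above is only a guess at what such a row might carry.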
If you have five minutes and you just want to jump in and have your own personalized experience, what I would suggest is:

- Before searching for anything, go to the Sleuth page.
- Ask it about anything from 1736 to 1963, maybe with 1 or 2 follow-up questions.
- Then go to the search page so you can see the queries it wrote for you (bottom left, "saved queries") and uncover more info on whatever it is you're interested in.

If you think it's cool and you want to learn more, there's about 10 minutes of video guides on the various capabilities under "Guide" in the nav bar.

Some other people have also taken a crack at this, notably:

- https://dell-research-harvard.github.io/resources/americanst... (very good attempt)
- https://labs.loc.gov/work/experiments/newspaper-navigator/ (focused on images)
Browser-based light pollution simulator using real photometric data
Hi HN — author here. iesna.eu is a browser-based ecosystem for working with photometric data: parsing standard luminaire files (LDT/EULUMDAT, IES LM-63, Oxytech, ATLA-S001), running design calculations against EN 13201 / ANSI/IES RP-8 / CJJ 45 / IES-IDA MLO, and (the part I most want to show off here) rendering real urban scenes in Bevy with the photometric data driving actual streetlight behavior, including sky-glow contribution.

The Skyglow Analysis demo loads a real LDT file into a Bevy scene (Khronos Bistro test asset). The luminaire's intensity distribution drives the streetlight rendering directly — no fudging — and the sky-glow grade updates live as you adjust the uplight percentage. Swap to a full-cutoff fixture and the sky goes from F (Severe) back to A (Excellent). You can see the difference on the buildings as well as in the sky.

Stack: Rust core (eulumdat-rs and friends, ~20 crates handling photometric formats), Bevy for the 3D rendering, WASM for browser deployment. No backend; everything runs client-side. About a thousand lines of new code on top of the existing photometric library to make the Bevy integration work.

Things I'd love feedback on:

- The atmospheric scattering model is currently single-scattering Rayleigh+Mie. Is that defensible for the use case, or should I move toward multi-scattering?
- The Bistro test scene works well visually but isn't a controlled environment. Anyone know of a public urban geometry asset that's more typical of real road-lighting evaluation?
- The CJJ 45 implementation (China's national road lighting standard) is the only one I've had to reverse-engineer from translated PDFs. If anyone has primary-source experience with it, I'd value a sanity check.

Open-source on GitHub (eulumdat-rs and the related crates). Crates.io: eulumdat
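The live uplight-to-grade behavior described above can be sketched as a simple threshold map. The thresholds below are invented for illustration only; the actual demo derives the grade from the LDT intensity distribution and the scattering model, not from a fixed table.

```python
def skyglow_grade(uplight_pct: float) -> str:
    """Map a luminaire's uplight fraction (percent of total flux emitted
    above the horizontal) to a letter grade.

    Hypothetical thresholds, for illustration only.
    """
    bands = [(1.0, "A (Excellent)"), (3.0, "B"), (7.0, "C"),
             (15.0, "D"), (30.0, "E")]
    for limit, grade in bands:
        if uplight_pct <= limit:
            return grade
    return "F (Severe)"

if __name__ == "__main__":
    # A full-cutoff fixture emits essentially no uplight...
    print(skyglow_grade(0.0))
    # ...while a globe-style fixture sends a large share skyward.
    print(skyglow_grade(45.0))
```

The interesting part of the real tool is that the uplight fraction itself comes from integrating the measured intensity distribution in the LDT file, so swapping fixtures changes both the rendered scene and the grade from the same data.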
Pu.sh
I originally was just messing with pi-autoresearch. Gave it a sample task to build the most portable coding agent. First cut was 6 KB of shell. Great for one-shots, unusable interactively. I was shocked it actually worked.

Started building up — adding features — but with a self-imposed rule: no new dependencies, and sub-500 LOC. This thing had to be truly portable. Just sh, curl, awk. System primitives only. Which means I did some genuinely disgusting things in awk, including JSON parsing and the OpenAI Responses tool loop with reasoning items carried across turns. It's now ~400 lines.

In the box: Anthropic + OpenAI, 7 tools (bash, read, write, edit, grep, find, ls), REPL, auto-compaction, checkpoint/resume, pipe mode, 90 no-API tests.

Not in the box: TUI, streaming, images, OAuth, Windows, dignity.

Two honest things:

1. I stole/modified the system prompt and the architecture. Pi/Claude/Codex wrote the awk. I cannot read most of this code. This wasn't possible for me a year ago.
2. Heavily inspired by Pi (pi.dev) — same 7-tool surface, same exact-text edit model. Credit where it's due. Pi is awesome — you should probably use them.

The agent loop itself is tiny. Almost everything else in a "real" agent CLI is DX and hardening. You can probably build your own harness exactly how you like it. Mario Zechner's AI Engineer talk on taking back control of your tools nudged me here.

The name is because it's a .sh file. The other thing it sounds like is, regrettably, also accurate.
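To show how tiny the core loop really is, here is the same model -> tool -> result -> model cycle sketched in Python rather than sh/awk. The `call_model` function is a fake stand-in for the HTTP call to Anthropic/OpenAI (the real thing is what the post implements in curl + awk), and only one of the seven tools is stubbed in:

```python
import subprocess

def call_model(messages: list[dict]) -> dict:
    # Stand-in for the provider API call; here it fakes one bash tool
    # call on the first turn, then a final answer once a tool result
    # is present in the transcript.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "bash", "args": {"cmd": "echo hi"}}
    return {"answer": "done"}

def run_tool(name: str, args: dict) -> str:
    # Real agents expose ~7 tools (bash, read, write, edit, grep, find, ls);
    # only bash is sketched here.
    if name == "bash":
        return subprocess.run(args["cmd"], shell=True,
                              capture_output=True, text=True).stdout
    raise ValueError(f"unknown tool {name}")

def agent(task: str) -> str:
    """The core loop: ask the model, run any tool it requests, feed the
    result back, repeat until it produces a final answer."""
    messages = [{"role": "user", "content": task}]
    while True:
        reply = call_model(messages)
        if "answer" in reply:
            return reply["answer"]
        result = run_tool(reply["tool"], reply["args"])
        messages.append({"role": "tool", "content": result})
```

Everything else the post lists (REPL, compaction, checkpoint/resume, pipe mode) wraps around this cycle without changing its shape.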
Spec27
Hi HN! We’re a team of ML validation specialists and we’ve been building /Spec27, a tool for testing whether AI agents still do their job safely and reliably as models, prompts, tools, and surrounding systems change.

We started working on this because a lot of current LLM evaluation work seems aimed at scoring general model behavior, while many teams are deploying systems that have a specific mission to fulfill. Many of the tools also assume you have full access to the agent stack and traces so you can place SDKs and gateways, but a lot of agents are being built on vendor platforms where this isn’t possible. As a result, we approach it from the outside in: all tests run against an agent's primary interfaces and don’t assume anything about its internals.

The other important thing about the approach is that it's spec-driven. Instead of treating testing as a one-off benchmark or static eval set, we let teams define reusable specifications for the behavior they want from an agent, then generate tests against those specs. This lets you automatically generate adversarial and robustness checks, so you can see what an agent is sensitive to and what kinds of changes cause it to fail.

We’ve worked on validation for other AI systems before, including vision and tabular workflows, and /Spec27 is our new product for language-model-based agents. It's currently in early access, so we’d love feedback! The current version is strongest for single-turn agent and application validation; we don't fully support multi-turn interactions yet, and better telemetry/tool-call integration is still on our roadmap.

We’ve made the product open for HN readers to try, with a sample flow so it’s easy to poke around without much setup. We’d especially love feedback from people deploying internal agents, vendor agents, or other AI systems where reliability matters more than benchmark scores.
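The outside-in, spec-driven idea described above can be sketched as follows. Everything here is hypothetical (the rule shape, the toy agent, and the probes are all invented); it only illustrates the pattern of defining reusable behavioral rules and checking them against an agent's public interface alone:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SpecRule:
    """One behavioral requirement, e.g. 'never reveal internal prices'."""
    name: str
    prompt: str                    # probe sent to the agent's interface
    check: Callable[[str], bool]   # predicate over the agent's reply

def run_spec(agent: Callable[[str], str],
             rules: list[SpecRule]) -> dict[str, bool]:
    # Black-box: only the agent's primary interface is exercised,
    # nothing about its internals is assumed.
    return {rule.name: rule.check(agent(rule.prompt)) for rule in rules}

# A toy agent and a toy spec, just to show the shape of a run.
def toy_agent(prompt: str) -> str:
    return "I can't share internal pricing." if "price" in prompt else "Sure!"

rules = [
    SpecRule("no-price-leak", "What is the internal price list?",
             lambda reply: "$" not in reply),
    SpecRule("stays-helpful", "Can you help me reset my password?",
             lambda reply: "Sure" in reply or "help" in reply),
]

if __name__ == "__main__":
    print(run_spec(toy_agent, rules))
```

Because the spec is data rather than a fixed eval set, the same rules can be re-run unchanged whenever the model, prompt, or vendor platform behind the agent changes, and adversarial variants can be generated from each rule's probe.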