State of the Art of Coding Models, According to Hacker News Commenters

Hello HN, I was away from my computer for two weeks, and after coming back and reading the latest discussions on HN about coding assistants (models, harnesses), I felt very out of the loop. My normal process would have been to keep reading and figure out the latest and greatest from people's comments, but I wanted to try to automate that process. Basically, the goal is to get a quick overview of which coding models are popular on HN. A next iteration could also scan for the harnesses people use, or for info on self-hosting and hardware setups.

I wrote a short intro on the page about the pipeline that collects and analyzes the data, but feel free to ask for more details or check the Google Sheet for more info. https://hnup.date/hn-sota
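The post doesn't spell out how the pipeline works, but the collection step is easy to picture. Here is a minimal sketch, assuming the public Algolia HN Search API; the model list, query term, and mention-counting heuristic are illustrative guesses, not the actual hnup.date pipeline.

```python
# Sketch of a collection step: pull recent HN comments via the public
# Algolia HN Search API and count mentions of coding models.
# MODELS and the query below are assumptions, not hnup.date's config.
import re
from collections import Counter

import requests

MODELS = ["claude", "codex", "gemini", "qwen", "deepseek", "kimi"]  # assumed

def fetch_comments(query: str, pages: int = 3) -> list[str]:
    """Fetch recent HN comments matching a query, newest first."""
    texts = []
    for page in range(pages):
        resp = requests.get(
            "https://hn.algolia.com/api/v1/search_by_date",
            params={"query": query, "tags": "comment", "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        texts += [hit.get("comment_text") or "" for hit in resp.json()["hits"]]
    return texts

def count_mentions(texts: list[str]) -> Counter:
    """Count comments containing a case-insensitive whole-word mention."""
    counts = Counter()
    for text in texts:
        for model in MODELS:
            if re.search(rf"\b{re.escape(model)}\b", text, re.IGNORECASE):
                counts[model] += 1
    return counts

if __name__ == "__main__":
    comments = fetch_comments("coding agent")
    for model, n in count_mentions(comments).most_common():
        print(f"{model:10s} {n}")
```

A real pipeline would also need to resolve aliases (e.g. version names vs. family names) and weigh sentiment, which is presumably where the LLM analysis step described on the page comes in.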

Developer Tools · BOTH · yunusabd
Revenue N/A


Similar Products

Developer Tools
Capgo

Instant updates for Capacitor apps. Ship fixes in minutes, not weeks. Push OTA updates to users without app store delays.

$15.2K /mo
Developer Tools · Easy to clone
OpenAlternative

Open source alternatives to popular software. Over 1 million users replaced their proprietary tools with open source software. Discover the best alternatives and join the movement.

$6.7K /mo
Developer Tools
Large-Scale Article Extraction of Newspapers, 1730s-1960s

Hello HN, over the past 7 months I've spent nearly 3,000 hours building SNEWPAPERS, the first historical newspaper archive with full-text extractions, nearly perfect OCR, a vast categorization taxonomy, and of course semantic and agentic search capabilities.

Problem: I wanted to search through newspaper archives, but every service I tried only lets you search for keywords and dates, and gives you back raw images of the papers, too many of them, with no context. A sea of noise.

Solution: I taught machines how to read the newspapers, and so far I've extracted the content from over 600k pages (about 5TB) of the Chronicling America collection. Problems I had to deal with included an infinite variety of layouts, font sizes, scan qualities, resolutions, and aspect ratios, plus navigating around the images on the page. I also had to figure out how to get the OCR nearly perfect so people wouldn't hate reading the extracts. I stitched together a multi-model pipeline (layout tech, OCR tech, LLM, vLLM) with heuristics to go from layout -> segmentation -> classification. I put it all in OpenSearch / Postgres, made it semantically searchable, and added an agentic search tool on top that knows how to use the API really well and helps you write queries to find what you're looking for. Happy to discuss AWS architecture and scaling as well; that was tough!

If you have five minutes and just want to jump in and have your own personalized experience, here's what I suggest:
- Before searching for anything, go to the Sleuth page.
- Ask it about anything from 1736 to 1963, maybe with 1 or 2 follow-up questions.
- Then go to the search page so you can see the queries it wrote for you (bottom left, "saved queries") and uncover more info on whatever you're interested in.
- If you think it's cool and want to learn more, there are about 10 minutes of video guides on the various capabilities under "Guide" in the nav bar.

Some others have also taken a crack at this, notably:
- https://dell-research-harvard.github.io/resources/americanst... (very good attempt)
- https://labs.loc.gov/work/experiments/newspaper-navigator/ (focused on images)
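The post doesn't show the search layer's internals. Here is a minimal sketch of how extracts could be made semantically searchable, assuming OpenSearch's k-NN plugin via opensearch-py and an off-the-shelf sentence-transformers embedder; the index name, field names, embedding model, and sample document are placeholders, not the actual SNEWPAPERS schema.

```python
# Hypothetical semantic-search layer over article extracts, using
# OpenSearch's k-NN plugin. Names and the embedder are assumptions.
from opensearchpy import OpenSearch
from sentence_transformers import SentenceTransformer

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim; assumed model

def embed(text: str) -> list[float]:
    return embedder.encode(text).tolist()

# One-time setup: a k-NN index holding the extracted article text,
# its publication date, and a dense embedding for semantic search.
client.indices.create(
    index="articles",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {"properties": {
            "text": {"type": "text"},
            "published": {"type": "date"},
            "embedding": {"type": "knn_vector", "dimension": 384},
        }},
    },
)

# Ingest one extracted article (in reality, >600k pages of them).
extract = "GOLD DISCOVERED ON THE KLONDIKE. Miners pour into Dawson..."
client.index(index="articles",
             body={"text": extract, "published": "1897-07-17",
                   "embedding": embed(extract)})

# Query: approximate nearest neighbours of the query embedding.
hits = client.search(index="articles", body={
    "size": 10,
    "query": {"knn": {"embedding": {"vector": embed("gold rush stampede"),
                                    "k": 10}}},
})["hits"]["hits"]
for h in hits:
    print(h["_score"], h["_source"]["published"], h["_source"]["text"][:80])
```

An agentic search tool like the Sleuth page would then sit on top, composing queries like this one (plus keyword and date filters) against the same index.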

Revenue N/A
Developer Tools
Browser-based light pollution simulator using real photometric data

Hi HN — author here. iesna.eu is a browser-based ecosystem for working with photometric data: parsing standard luminaire files (LDT/EULUMDAT, IES LM-63, Oxytech, ATLA-S001), running design calculations against EN 13201 / ANSI/IES RP-8 / CJJ 45 / IES-IDA MLO, and (the part I most want to show off here) rendering real urban scenes in Bevy with the photometric data driving actual streetlight behavior, including sky-glow contribution.

The Skyglow Analysis demo loads a real LDT file into a Bevy scene (the Khronos Bistro test asset). The luminaire's intensity distribution drives the streetlight rendering directly — no fudging — and the sky-glow grade updates live as you adjust the uplight percentage. Swap to a full-cutoff fixture and the sky goes from F (Severe) back to A (Excellent). You can see the difference on the buildings as well as in the sky.

Stack: Rust core (eulumdat-rs and friends, ~20 crates handling photometric formats), Bevy for the 3D rendering, WASM for browser deployment. No backend; everything runs client-side. About a thousand lines of new code on top of the existing photometric library to make the Bevy integration work.

Things I'd love feedback on:
- The atmospheric scattering model is currently single-scattering Rayleigh+Mie (sketched below). Is that defensible for the use case, or should I move toward multi-scattering?
- The Bistro test scene works well visually but isn't a controlled environment. Does anyone know of a public urban geometry asset that's more typical of real road-lighting evaluation?
- The CJJ 45 implementation (China's national road lighting standard) is the only one I've had to reverse-engineer from translated PDFs. If anyone has primary-source experience with it, I'd value a sanity check.

Open-source on GitHub (eulumdat-rs and the related crates). Crates.io: eulumdat
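For context on the first feedback item: "single-scattering Rayleigh+Mie" typically means summing a Rayleigh phase function and a Henyey-Greenstein approximation of Mie scattering along the view ray, with no secondary bounces. A plain-Python sketch of that model follows; the coefficients are ballpark sea-level values, my assumptions rather than iesna.eu's constants, and the real implementation lives in Rust/Bevy shaders.

```python
# Illustrative single-scattering Rayleigh + Mie model: one in-scatter
# term per light direction, constant density, no multiple scattering.
# Coefficients are rough sea-level values at ~550 nm, not iesna.eu's.
import math

def rayleigh_phase(cos_theta: float) -> float:
    """Rayleigh phase function, normalized over the sphere."""
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta ** 2)

def mie_phase_hg(cos_theta: float, g: float = 0.76) -> float:
    """Henyey-Greenstein stand-in for Mie; g > 0 gives the strong
    forward lobe characteristic of aerosol scattering."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def single_scatter(radiance: float, cos_theta: float,
                   beta_r: float = 1.35e-5,   # Rayleigh coeff, 1/m (assumed)
                   beta_m: float = 2.1e-6,    # Mie coeff, 1/m (assumed)
                   path_m: float = 1000.0) -> float:
    """In-scattered radiance toward the eye from one light direction
    over a short constant-density path. With no secondary bounces,
    sky glow tracks the uplight fraction almost linearly, which is
    why the demo's grade responds immediately to the uplight slider."""
    phase = beta_r * rayleigh_phase(cos_theta) + beta_m * mie_phase_hg(cos_theta)
    transmittance = math.exp(-(beta_r + beta_m) * path_m)
    return radiance * phase * path_m * transmittance

# Forward scattering dominates: compare near-forward vs. side/back angles.
for deg in (10, 90, 170):
    c = math.cos(math.radians(deg))
    print(f"{deg:3d} deg: {single_scatter(1.0, c):.3e}")
```

The usual argument for staying single-scattering is exactly this linearity and cheapness; multi-scattering mainly matters for hazy atmospheres and horizon-grazing paths, where it brightens the sky beyond what the single-scatter term predicts.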

Revenue N/A
Developer Tools
Pu.sh

I was originally just messing with pi-autoresearch. I gave it a sample task: build the most portable coding agent. The first cut was 6 KB of shell. Great for one-shots, unusable interactively. I was shocked it actually worked.

I started building it up, adding features, but with a self-imposed rule: no new dependencies and under 500 LOC. This thing had to be truly portable. Just sh, curl, awk. System primitives only. Which means I did some genuinely disgusting things in awk, including JSON parsing and the OpenAI Responses tool loop with reasoning items carried across turns. It's now ~400 lines.

In the box: Anthropic + OpenAI, 7 tools (bash, read, write, edit, grep, find, ls), REPL, auto-compaction, checkpoint/resume, pipe mode, 90 no-API tests.
Not in the box: TUI, streaming, images, OAuth, Windows, dignity.

Two honest things:
1. I stole/modified the system prompt and the architecture. Pi/Claude/Codex wrote the awk. I cannot read most of this code. This wasn't possible for me a year ago.
2. Heavily inspired by Pi (pi.dev): same 7-tool surface, same exact-text edit model. Credit where it's due. Pi is awesome; you should probably use them.

The agent loop itself is tiny (a minimal sketch follows below); almost everything else in a "real" agent CLI is DX and hardening. You can probably build your own harness exactly how you like it. Mario Zechner's AI Engineer talk on taking back control of your tools nudged me here.

The name is because it's a .sh file. The other thing it sounds like is, regrettably, also accurate.
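To make "the agent loop itself is tiny" concrete, here is a minimal sketch of that loop shape in Python rather than Pu.sh's actual sh/awk, using the Anthropic Messages API with a single bash tool (Pu.sh exposes 7 tools and also speaks the OpenAI Responses API). The model id and the output-truncation limit are assumptions.

```python
# Minimal agent loop: ask the model, run any tool calls it makes,
# feed the results back, repeat until it stops asking for tools.
import subprocess

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
TOOLS = [{
    "name": "bash",
    "description": "Run a shell command and return its combined output.",
    "input_schema": {"type": "object",
                     "properties": {"command": {"type": "string"}},
                     "required": ["command"]},
}]

def run_agent(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # assumed; use any current id
            max_tokens=4096, tools=TOOLS, messages=messages,
        )
        messages.append({"role": "assistant", "content": resp.content})
        if resp.stop_reason != "tool_use":
            # Model is done; return its final text blocks.
            return "".join(b.text for b in resp.content if b.type == "text")
        # Execute every requested tool call and send the results back.
        results = []
        for block in resp.content:
            if block.type == "tool_use":
                out = subprocess.run(block.input["command"], shell=True,
                                     capture_output=True, text=True)
                results.append({"type": "tool_result",
                                "tool_use_id": block.id,
                                "content": (out.stdout + out.stderr)[-10_000:]})
        messages.append({"role": "user", "content": results})

print(run_agent("List the five largest files in the current directory."))
```

Everything else Pu.sh ships (compaction, checkpoint/resume, pipe mode, the other six tools) wraps around this same request / tool-call / result cycle.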

Revenue N/A
