Product Catalog
38 products tracked
Sycamore
Show HN: Sycamore – next-gen Rust web UI library using fine-grained reactivity
Cerno
Show HN: Cerno – CAPTCHA that targets LLM reasoning, not human biology
EU Leadership
Show HN: EU Leadership – Live API data site comparing Europe to the world
1-Bit Bonsai, the First Commercially Viable 1-Bit LLMs
Show HN: 1-Bit Bonsai, the First Commercially Viable 1-Bit LLMs
Hyprmoncfg
Show HN: Hyprmoncfg – Terminal-based monitor config manager for Hyprland
Sundial
Show HN: Sundial – a new way to look at a weather forecast
30u30.fyi
Show HN: 30u30.fyi – Is your startup founder on Forbes' most fraudulent list?
The Guardian
JD Vance says aliens are 'demons' and details obsession with UFOs
We scored 50k PRs with AI
I'm a CTO with a ~16-person engineering team. Last year I wanted real data on what was actually shipping, not guesswork or story point theater. So we built GitVelocity. Every merged PR gets scored 0–100 by Claude across six dimensions: scope (0–20), architecture (0–20), implementation (0–20), risk (0–20), quality (0–15), perf/security (0–5). The six dimensions are added up, then scaled by change size – a 10-line fix scores lower than a 500-line refactor even at the same complexity. Full formula at gitvelocity.dev/scoring-guide.

After scoring 50,000+ PRs across TypeScript, Python, Rust, Go, Java, Elixir, and more, some things surprised us:

Big PRs don't automatically score high. An 800-line migration with low complexity scores worse than a 200-line architectural change. Size gets you the full multiplier, but the base score still has to earn it.

You can't score well without tests. The quality dimension (0–15) won't give you points without test coverage. At similar experience levels, this was the clearest separator between engineers.

Juniors started outscoring some seniors. They adopted AI tools faster and took on harder problems. Once they could see their own scores, they aimed higher.

We score AI-generated code the same as human-written code. Code is code. An engineer who uses AI to ship more complex work faster is more productive, and their scores reflect that.

Scoring consistency was the hardest technical problem. Without reference examples anchoring each dimension, Claude's scores drifted 15+ points between runs. With 18 calibrated anchors (three per dimension at low/mid/high), we got it down to 2–4 points on the same PR.

The thing we didn't expect was behavioral. We call it the Fitbit effect – the tool doesn't make you ship better code, but seeing the score does. Engineers started referencing their own scores in 1:1s unprompted, because the numbers matched what they already felt about their work. A junior who shipped a tricky concurrency fix could point to a score that proved it wasn't "just a small PR."

We recently added team benchmarks (gitvelocity.dev/demo/benchmarks). Once you're scoring PRs, you can see how your team compares to others across the dataset – about 1,000 engineers on 60 teams so far. Headline's team ships faster than roughly 95% of them, which was nice to confirm but also made us wonder who the other 5% are. The competitive angle surprised us: teams that were skeptical about individual scores got genuinely curious once they could measure themselves against the field.

Every score is fully visible to the engineer who wrote the PR, with per-dimension breakdowns and reasoning. There's no hidden dashboard that management sees and engineers don't.

Free, BYOK (your Anthropic API key). We default to Sonnet 4.6, which scores nearly as well as Opus 4.6 at a fraction of the cost – but you can switch models if you want. Pennies per PR either way. No source code stored, diffs analyzed and discarded. Works with GitHub, GitLab, and Bitbucket.

Ask me anything about the scoring methodology, how we solved calibration, or what it was actually like rolling this out to a team.
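The sum-then-scale idea can be sketched roughly like this. The dimension caps come from the post, but the size-multiplier shape (linear ramp to a 500-line threshold) is a made-up assumption for illustration; the real formula lives at gitvelocity.dev/scoring-guide.

```python
# Hypothetical sketch of "sum six dimensions, then scale by change size".
# Dimension caps are from the post; the multiplier curve is an assumption.

DIMENSION_CAPS = {
    "scope": 20, "architecture": 20, "implementation": 20,
    "risk": 20, "quality": 15, "perf_security": 5,
}

def base_score(dims: dict) -> int:
    """Sum the six dimension scores, clamping each to its cap (max 100)."""
    return sum(min(dims.get(name, 0), cap) for name, cap in DIMENSION_CAPS.items())

def size_multiplier(lines_changed: int, full_size: int = 500) -> float:
    """Illustrative scaling: ramps linearly up to 1.0 at `full_size` lines."""
    return min(1.0, lines_changed / full_size)

def pr_score(dims: dict, lines_changed: int) -> float:
    return base_score(dims) * size_multiplier(lines_changed)

# A 10-line fix vs. a 500-line refactor at the same per-dimension complexity:
dims = {"scope": 15, "architecture": 14, "implementation": 16,
        "risk": 12, "quality": 10, "perf_security": 3}
small = pr_score(dims, 10)    # base 70 * 0.02 = 1.4
large = pr_score(dims, 500)   # base 70 * 1.0  = 70.0
```

Note how the base score still has to "earn it": a big diff with low per-dimension scores gets the full multiplier applied to a small base.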
I made a free list of 100 places to promote your SaaS
It's a curated list of directories and launch platforms where you can submit your product and actually get traffic, backlinks, and early users. I included useful data for each one so you don't have to guess where it's worth posting.

Features:
* 100+ directories and platforms to promote your SaaS
* SEO data like domain rating and traffic
* Info on whether links are dofollow or nofollow
* Organized and easy to go through
* Saves hours of searching and manual research
* Great for getting early users and backlinks

If you're launching or growing a SaaS and don't know where to promote it, this should help.
DeepRepo
Show HN: DeepRepo – AI architecture diagrams from GitHub repos
Timezone App
Scheduling meetings across multiple time zones has always been painful for me, especially across daylight saving time transitions. So I built a visual timeline that makes it easy to find overlapping availability. Add your locations, drag to select a time range, and share a link. Recipients see the proposed times in their local time zone automatically.

A few things that might be interesting:
* Location search over GeoNames with fuzzy matching using weighted edit distance, so typos and partial names still resolve correctly.
* Shareable links encode the selected time range and locations in a base62 payload to keep URLs short and stateless – no database lookup needed.
* Handles the annoying edge cases: DST transitions use the IANA timezone database, and 15/30-minute UTC offsets (Nepal, India, Newfoundland) work correctly.
* Google Calendar and Outlook integration, but all calendar data is fetched and processed entirely in the browser. Events are never sent to or stored on the server.

Would love feedback on what's useful, not useful, or could be improved!
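The stateless base62 payload described above can be sketched as follows. The field layout (a start minute and a duration packed into one integer, 12 bits for the duration) is an illustrative assumption, not the app's actual encoding.

```python
import string

# Base62 alphabet: 0-9, a-z, A-Z.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def b62_encode(n: int) -> str:
    """Encode a non-negative integer as a base62 string."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def b62_decode(s: str) -> int:
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n

# Pack (start_minute, duration_minutes) into one integer, then encode.
# 12 bits for duration (up to ~68 hours) is an assumed, illustrative layout.
def encode_range(start_min: int, dur_min: int) -> str:
    return b62_encode((start_min << 12) | dur_min)

def decode_range(token: str) -> tuple:
    n = b62_decode(token)
    return n >> 12, n & 0xFFF
```

Because the token round-trips to the original integers, the server never needs a database row per shared link; everything the recipient's browser needs is in the URL itself.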