Roaster
SyncVibe

Show HN: SyncVibe – Code with friends in the terminal, each with your own AI

AI Tools BOTH · curious1008
N/A
Revenue not available

AI Analysis

Analysis coming soon.

Similar Products

AI Tools
Figma alternative where AI works with vector primitives, not code

Show HN: Figma alternative where AI works with vector primitives, not code

Revenue N/A
AI Tools
Time Pin

Hi! Any history nerds here? I made Time Pin, a little game inspired by GeoGuessr but history-themed. You can play it here (it works on both desktop and mobile); any feedback is appreciated: https://www.crazygames.com/game/time-pin

Now some details: the goal is to guess the time and place that a character is from. You base your guess on some environmental photos and on questions that you can ask the character (there are 12 questions but you can only ask 5, so you have to choose carefully). The closer you are, the more points you get. At the end, a portrait of the character is revealed, along with educational resources to learn more about their culture and era (articles, videos, podcasts, etc.).

The game only has 5 levels currently, but I hope to have over 100 someday. Levels are tough to create because each one requires research, plus generating photos with AI (AI is necessary, otherwise we'd only have photos starting from the 19th century, when the camera was invented). My goal for the game was to create a challenge, and maybe spark some curiosity for history.

Revenue N/A
AI Tools
LLMs consume 5.4x less mobile energy than ad-supported web search

The standard AI energy debate compares server-side LLM inference to a server-side Google query. I think this misses most of what actually happens on a mobile device during a real search session.

I built a parametric model of the full end-to-end mobile search session: 4G/5G radio energy, SoC rendering cost for a 2.5 MB page, programmatic advertising RTB auctions running in the background, and network transmission costs for both sides. Then I compared it to an equivalent LLM session.

Main finding across 10,000 Monte Carlo draws: on mobile, a standard LLM session uses on average 5.4x less energy than a classic ad-supported web search session. Programmatic advertising alone accounts for up to 41% of device battery drain per session.

Caveats I tried to be explicit about:
- The advantage disappears on fixed Wi-Fi/fiber
- It reverses for reasoning models
- This is a parametric model, not an empirical device measurement; Greenspector has offered to run terminal measurements for v2
- Jevons paradox applies
- It's an SSRN working paper, not peer-reviewed

Methodology and the Monte Carlo distributions are fully documented in the paper. Happy to defend the assumptions. DOI: 10.2139/ssrn.6287918
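The paper's actual distributions aren't reproduced in this listing, but the Monte Carlo comparison described above can be sketched in a few lines. All component means, the log-normal spread, and the variable names below are illustrative assumptions, not the paper's values:

```python
import math
import random
import statistics

random.seed(42)
N = 10_000  # draws, matching the post's "10,000 Monte Carlo draws"

def lognormal(mean_j, sigma=0.3):
    """Sample an energy cost in joules with the given mean (illustrative spread)."""
    mu = math.log(mean_j) - sigma**2 / 2  # so that E[X] == mean_j
    return random.lognormvariate(mu, sigma)

ratios = []
for _ in range(N):
    # Hypothetical per-session device energy components (joules), mobile network
    search = (
        lognormal(12.0)    # 4G/5G radio wake + transfer for a ~2.5 MB page
        + lognormal(8.0)   # SoC rendering cost of the page
        + lognormal(10.0)  # background programmatic-ad RTB traffic
        + lognormal(3.0)   # device share of network transmission
    )
    llm = (
        lognormal(4.0)     # radio energy for a small request/response
        + lognormal(2.0)   # rendering a text-only answer
    )
    ratios.append(search / llm)

print(f"median search/LLM energy ratio: {statistics.median(ratios):.1f}x")
```

With these made-up means the median ratio lands in the same ballpark as the post's 5.4x, but that is a consequence of the chosen inputs; the paper's finding rests on its own documented distributions.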

Revenue N/A
AI Tools
Atomic

Show HN: Atomic – Local-first, AI-augmented personal knowledge base

Revenue N/A
AI Tools
I built a toy that plays grandma's stories when my daughter hugs it

This was a project I built for my daughter's first birthday present. For context, I'm a surgical resident in the UK by background and am currently taking a year out of training to study for a master's in computer science.

My daughter just turned one. There are two things she really loves: the first is a particular soft toy that she just can't live without, and the other is a good story book. Her grandparents live hours away and I didn't want her to forget what they sound like between visits; I wanted her to hear them whenever she missed them. My parents brought my brother and me up with incredible stories and books from all sorts of cultures, many of them passed down from their parents before them. I didn't want my daughter to miss out on that. Finally, I was sick of missing storytime with her when I had to leave for night shifts. I wanted her to hear my voice before she slept every night.

For all these reasons, I decided to build Storyfriend. It's her favourite soft toy with a custom-made speaker module inside. I combined my surgical skills with the skills I was learning as a CS student, and along the way I dipped my toes into the worlds of 3D printing, CAD and electronics design. When she hugs the toy, it plays stories read by her grandparents. She can take the toy with her anywhere and hear the stories anytime she wants: it works offline and has internal storage. It meets my wife's strict no-screen rule (which is getting harder to stick to as the days go by). I've recorded some of the stories that we would read together, so that on nights when I'm working she still has me there to read her a bedtime story.

The bit I'm most pleased with: grandparents don't need an app. They just call a phone number. The audio routes through my server and pushes to the toy over WiFi. My own 86-year-old grandmother in a rural village in another country can do it just by making a regular call from her landline, as she has done for many years: no help needed, no apps required, no smartphones involved.

Hardware is a BLE/WiFi module with a MAX98357 chip and a custom battery management system, all soldered together, placed in a 3D-printed enclosure and fitted into a compartment that I stitched into her cuddly toy. Firmware pulls new messages when connected to WiFi and stores them on an SD card.

So far I've sold a few hand-made units to parents and grandparents who resonated with the project. Site: https://storyfriend.co.uk

Would love feedback on the technical approach, the product itself, or anything else. Happy to answer questions about the build.
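The sync step described above (firmware pulls new messages over WiFi and stores them on the SD card) reduces to diffing a server manifest against what is already stored locally. The function, field names, and message IDs below are illustrative assumptions, not the actual Storyfriend firmware:

```python
def new_messages(server_manifest, local_ids):
    """Return server entries not yet stored on the SD card, oldest first.

    server_manifest: list of dicts like {"id": str, "recorded_at": int}
                     (hypothetical shape of the server's message listing)
    local_ids:       set of message IDs already stored locally
    """
    missing = [m for m in server_manifest if m["id"] not in local_ids]
    return sorted(missing, key=lambda m: m["recorded_at"])

# Example: two recordings on the server, one already on the device
manifest = [
    {"id": "grandma-001", "recorded_at": 1},
    {"id": "dad-002", "recorded_at": 2},
]
to_fetch = new_messages(manifest, {"grandma-001"})
print([m["id"] for m in to_fetch])  # → ['dad-002']
```

On a WiFi connect, firmware of this kind would typically fetch the manifest, run a diff like this, download only the missing audio files, and record their IDs, so interrupted syncs resume cleanly.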

Revenue N/A

Quick Facts

Category
AI Tools
Audience
BOTH
Founder
curious1008
Revenue data
Unknown
