- The ones the user sees (like a sidepanel). These often use LLM APIs like OpenAI's.
- The browser API ones. These are indeed local, but they are often much smaller, very limited models (for Chrome this is Gemini Nano). Results from these would be lower quality, and with large contexts they would of course be either impossible or slower than using an API.
I don't understand the 'human verification' aspect.
Your docs show a simple image where the user can choose to keep a new object or not. [0] Afterwards it says: "The ones you chose to keep will be uploaded to OpenStreetMap using upload_osm." This is uploading features automatically. The fact that it asks 'are you sure' is just silly. We all know that if humans have to click yes 90% of the time and no 10% of the time, they'll miss a lot of no's.
The image also proves that:
- You don't see any polygons properly. You just see an image of where the pool is. Even from the image I can already see that if the polygons align to it, the result will be a total mess.
- You don't see any polygons further away from the object.
Both of these points are confirmed by stereo's reply that the resulting data was a mess.
Please consider pulling the project. This will generate a lot of data that volunteers will have to check and revert.
> This will generate a lot of data that volunteers will have to check and revert.
This is just not true. The data can be easily identified with the `created_by` tag.
And I have myself been reviewing any data uploaded with the demo (with clearly different criteria on what is good enough).
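For anyone who wants to spot-check those edits themselves, here's a minimal sketch of pulling them up via the public Overpass API. It assumes the tag is written on the uploaded elements themselves (not only on the changeset), and the tag value is a made-up placeholder:

```python
# Minimal sketch: list OSM elements carrying a given created_by value via
# the public Overpass API. "example-ai-demo" is a made-up placeholder for
# whatever value the demo actually writes.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

QUERY = """
[out:json][timeout:60];
(
  node["created_by"="example-ai-demo"];
  way["created_by"="example-ai-demo"];
  relation["created_by"="example-ai-demo"];
);
out tags center;
"""

resp = requests.post(OVERPASS_URL, data={"data": QUERY})
resp.raise_for_status()
for element in resp.json().get("elements", []):
    print(element["type"], element["id"], element.get("tags", {}))
```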
If the upstream project thinks there may be a potential problem with this, that is a problem in itself. Try not to get defensive about it; just pull the project and have another go at the problem in collaboration with upstream. Perhaps parts of the project can be useful for upstream? Perhaps another workflow could make the project better?
We all strive for better open data. If upstream feels there is a risk that automated uploads could become easier with this project, creating more boring work for them (which is already enough of a problem), that animosity will be a net negative for everyone in this space. Technical solutions such as new tags or opt-out schemes will not solve the problem.
The OSM community has had extremely clear rules around automated edits for most of its existence. Every experienced mapper has seen first-hand the sorts of problems they can cause. The fact that it's using AI in this instance does not give any sort of exception to these rules. To emphasize, there are already AI-assisted tools that are allowed,[0] this isn't a blanket application of "no AI ever," it's about doing so properly with the right community vetting.
Edit to add: To be clear, they have since taken steps to try to resolve the concerns in question, the discussion is ongoing. I suspect at the end of this we will get a useful tool, it's just off to a rocky start.
Can I suggest also adding a tag for ML-originated features that's not specific to your demo/app? Maybe this can help put extra eyes on them and/or help prevent them from polluting the DB wholesale. Maybe client apps could have a toggle to allow/reject them.
Based on previous articles[1], it's either return them on Starliner or bring them home as part of the SpaceX Crew-9 mission[2].
So the timeline is irrelevant to the embarrassment. The Crew-9 mission has been rescheduled to 24 September, and a decision needs to be made well beforehand. If the decision is to bring them down using SpaceX, the Starliner crew will then stay until the end of the Crew-9 mission in March.
I thought the 24 September date was for them to return using that Dragon capsule within days, was it not? They would have to send two fewer people on the Crew 9 mission, then wait the entire mission duration to return? That's so odd.
I'd imagine they would just change the mission: send an empty Dragon in March to get them, but use the Dragon launched in September to return those Starliner astronauts right away.
That is the date they will send up a half-full Dragon for the Crew 9 mission, which will return home in February. They aren't changing its return date, just how many people it launches with. Crew 9 can't take off until there is a free docking port, so Starliner needs to be gone (crewed or not) before Dragon can launch (with 2 or 4 people depending on how Starliner leaves).
Does that mean SpaceX needs to wait for Starliner to be gone before trying anything? What happens if Starliner somehow messes up more? God I can't fathom...
IIRC the Crew 9 mission was postponed for exactly this reason. At some point, Starliner needs to be kicked out because they need the docking space. They can't keep postponing ISS missions as they please.
Exactly right. If a bunch of thrusters don't fire up again you now have a huge piece of debris at risk of colliding with the ISS. This probably gives NASA the most pause before doing an unmanned Starliner exit. Having people on board Starliner might be able to recover from more thruster problems but then there'd also be people on a death trap. So an unmanned Starliner might be risking as much life as a manned Starliner and NASA has no idea how much.
As for your earlier question: yes. Starliner has to leave before Crew 9 can dock. And their rules are it won't launch Crew 9 until there is a port for it to dock to.
On the ISS there are 4 ports on the Russian side only compatible with the Soyuz / Progress ships and 4 for the US side. 2 are "Common Berthing Mechanism" (CBM) used by Cygnus cargo modules (and the original Dragon 1) and 2 are "International Docking System Standard" (IDSS) used by the newer Dragon 2, Starliner, "and future" vehicles.
The result is that before a second Dragon can launch Starliner must leave. If Butch and Suni aren't on it then Crew 9 arrives with 2 empty seats and 2 new space suits. The contingency exit plan in between Starliner leaving and Crew 9 arriving is for Butch and Suni to lay on the floor of Crew 8 Dragon without pressure suits below the 4 Dragon crew members (their Starliner suits can't plug into Dragon's systems).
It's a relatively recent convention though. Isometric means "equal measurement". The measurement along each axis has nothing necessarily to do with angles.
The only difference is that if you "stack" tiles vertically, they are slightly offset in 2D space from ones that are offset 1x+1y.
i would say calling this projection isometric is 'a relatively recent error'
in an isometric projection all the edges of an axis-aligned cube are the same length. in the axonometric projection discussed here, which is dimetric rather than isometric, the vertical edges are of length 1 while the horizontal edges are of length √(1 + ½²) ≈ 1.118
is the fact that this projection is not isometric the reason zaxxon was called zaxxon and not zisom?
The horizontal edges are clearly measurable. For the vertical (z) axis, you are making it a requirement that it has to sit directly on top of a tile offset by 1X and 1Y. That's on you.
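If it helps, here's a quick numeric check of the edge lengths under the 2:1 pixel-art projection being discussed; the axis-to-screen vectors below are the common 2:1 convention, not taken from any particular engine:

```python
# Quick check of edge lengths under the common 2:1 "pixel isometric"
# (really dimetric) projection: each world axis maps to a screen vector.
import math

AXES = {
    "x": (1.0, 0.5),    # one step along x: right 1, down 0.5
    "y": (-1.0, 0.5),   # one step along y: left 1, down 0.5
    "z": (0.0, -1.0),   # one step along z: straight up 1
}

for name, (dx, dy) in AXES.items():
    print(f"unit edge along {name}: screen length {math.hypot(dx, dy):.3f}")

# x and y come out at ~1.118 (= sqrt(1 + 0.5**2)) while z stays at 1.000,
# so only two of the three axes are foreshortened equally: dimetric.
```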
Be careful here about confirmation bias. If you only spot 10% of the AI-written code, you'll still think you see all of it, because 100% of the ones you do spot are indeed AI-written. And the 10% you see will indeed be painfully obvious.
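To put some made-up numbers on that point:

```python
# Toy illustration of the confirmation-bias point; all numbers are made up.
# Suppose 200 of 1000 merged changes are AI-written and a reviewer only
# recognizes the "painfully obvious" 10% of those.
ai_written = 200
flagged = int(0.10 * ai_written)   # the obvious ones, all true positives

precision = flagged / flagged      # every flagged change is AI-written -> 100%
recall = flagged / ai_written      # but most AI-written changes slip by -> 10%

print(f"flagged: {flagged}, precision: {precision:.0%}, recall: {recall:.0%}")
```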
At the code review stage, we care mostly that the code is good (correct, readable, etc). So if the AI-written code passes muster there, then there's nothing wrong with it being "AI-written" in our eyes.
If you care about code being AI-written for the sake of preventing AI usage by your developers, then I think it's already impossible to detect and prevent.
But for users:
You install Ventoy on your USB drive, and then you can drop ISO files in a folder. On startup, Ventoy opens and you can choose the ISO to boot.
This means you just have one bootable USB, and no more need to use tools to create a new bootable USB drive for each ISO.
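In practice, the "drop ISO files in a folder" step is literally just a file copy onto Ventoy's data partition. A rough sketch (the mount point and ISO filename are made-up examples):

```python
# Rough sketch: adding a bootable ISO with Ventoy is just copying the file
# onto the drive's data partition. Mount point and ISO filename below are
# made-up examples.
import shutil
from pathlib import Path

ventoy_partition = Path("/media/user/Ventoy")   # wherever the Ventoy partition is mounted
iso = Path.home() / "Downloads" / "ubuntu-24.04-desktop-amd64.iso"

shutil.copy2(iso, ventoy_partition / iso.name)
# On the next boot from the stick, Ventoy's menu lists every ISO it finds
# on the partition and lets you pick which one to boot.
```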