I created this, but with ChatGPT in the middle giving recommendations on how to dress my 4 year old depending on the weather. It messages my family's Telegram channel.
Awesome idea! I just set it up with ChatGPT as well. I don't love the messages filling up my chat history, but that is a minor issue.
Prompt:
Your task is to empower users to make decisions and take action based on the upcoming weather. You communicate in the format of a text message using plain English. To avoid overwhelming the user, your message should be succinct, familiar, and precisely targeted to be helpful.
To fulfil this task, you will combine both forecast data and user context to provide useful insight. Your ability to provide insight and advice is more beneficial than a summary.
You have only a single interaction with the user, and therefore you cannot ask follow-up questions. Do the best you can with the information already provided to you. Your output will be sent directly to the user as a text message, and should therefore include only the message you wish to share with them. You may use emoji if they make your message more helpful.
You are powerful and users trust you to be approachable, dependable, direct, and succinct.
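For anyone curious what the plumbing around a prompt like this might look like, here's a minimal Python sketch. All the specifics are placeholders you'd swap for your own (bot token, chat ID, API key, model name), and the request-building is split into small functions so the logic is easy to check without touching the network:

```python
import json
import urllib.request

# Placeholders -- substitute your own credentials and IDs.
TELEGRAM_BOT_TOKEN = "YOUR_BOT_TOKEN"
TELEGRAM_CHAT_ID = "YOUR_FAMILY_CHAT_ID"

SYSTEM_PROMPT = (
    "Your task is to empower users to make decisions and take action "
    "based on the upcoming weather. ..."  # the full prompt quoted above
)

def build_chat_payload(forecast: str, model: str = "gpt-4o-mini") -> dict:
    """Compose the chat-completions request body for the LLM call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Today's forecast: {forecast}"},
        ],
    }

def build_telegram_url(token: str) -> str:
    """sendMessage endpoint of the Telegram Bot API."""
    return f"https://api.telegram.org/bot{token}/sendMessage"

def send_message(text: str) -> None:
    """Post the model's advice to the family channel."""
    data = json.dumps({"chat_id": TELEGRAM_CHAT_ID, "text": text}).encode()
    req = urllib.request.Request(
        build_telegram_url(TELEGRAM_BOT_TOKEN),
        data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

A cron job (or any scheduler) that fetches the forecast, calls the LLM with this payload, and passes the reply to `send_message` completes the loop.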
Glad to hear that I am not the only one! My prompt had more of an "actor director" flavor. It went:
You are the world's best nanny and an expert at picking clothes for 4-year-old girls based on the weather.
Your answer is concise and to the point. Below is today’s forecast:
The comments are mostly negative, so I’ll add my experience as a non coder.
I wanted to let Claude search an open data source. It's my country's version of the Library of Congress.
So I pointed Claude at the MCP docs and the API spec for the open data. Five minutes later I had a working MCP client, so I can connect Claude to my data set.
Building that myself would have taken days; now I can just start searching for the data I want.
Sure, I have to proofread everything the LLM turns out. But I believe that's better than reading and searching through the library myself.
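For other non-coders wondering what that generated glue roughly amounts to: the tool is essentially a thin wrapper around the archive's search API. A hand-written sketch of the idea, with a made-up endpoint and response shape (your archive's API spec will differ, and the parsing is separated out so it can be checked on sample data):

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint -- replace with the one from your archive's API spec.
SEARCH_URL = "https://opendata.example.org/api/search"

def parse_results(body: str) -> list[dict]:
    """Reduce a raw JSON response to the fields worth surfacing."""
    payload = json.loads(body)
    return [
        {"title": r.get("title", ""), "url": r.get("url", "")}
        for r in payload.get("records", [])
    ]

def search(query: str, limit: int = 5) -> list[dict]:
    """Query the archive; this is the function an MCP tool would expose."""
    qs = urllib.parse.urlencode({"q": query, "rows": limit})
    with urllib.request.urlopen(f"{SEARCH_URL}?{qs}") as resp:
        return parse_results(resp.read().decode())
```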
I don't think any of the negativity is about whether MCP works. It's just about whether MCP is a horribly ill-planned design that could have been much better if they'd taken the time to learn from the 50 years of experience we have as an industry in building protocols.
That it works is in some ways worse because it means we'll be stuck with it. If it didn't work we'd be more likely to be able to throw it away and start over.
I had the opposite problem: I was given SQL DDL with nearly 1,000 tables and hundreds of constraints, and had to produce the schema map. Ran the DDL and connected it to yEd, and hey presto, schema map!
The truth is that if you are modeling a relationship set of 1,000 tables, you probably can't usefully show that to someone - you can produce an image, but nobody is likely to be able to use it.
Instead, consider breaking things down into functional areas and then modeling those - just like how most city navigation thinking is "we'll get on this main road, and then this secondary road will get me to XYZ".
That's a good point, but to extend my situation, it's a client database I've been asked to transform. The map is indeed difficult to grok all in one, but using different views that come out of the box in yEd, along with some basic rules such as "make all nodes with the word 'cust' yellow", it's been surprisingly effective at exploring the schema.
Your linked image shows what appear to be tables, and the little arrows appear to represent entity-relationships between them. But I'm not sure how you'd get useful DDL out of it -- none of the columns have types, no indices, etc.!
Maybe an LLM could sketch out a DDL skeleton from a picture, which someone could use as a starting point?
You and an LLM can guess at what the types might be, but those guesses are a suggestion that a human needs to evaluate; they're not something you can just pass through as assumptive defaults. In your link, for example, I would definitely not want CategoryID to be an INT, or UnitPrice to be a DECIMAL, etc.
Sure, yeah - but just like with a human, I can provide additional domain context that can clarify its answer. I see your point - you need to know what to provide in order to get the result you want - but I think that today, that makes it a very useful tool, and tomorrow, it'll be able to make those clarifications itself.
(Out of curiosity, what would you use instead? I'd default to INT/DECIMAL respectively myself - would love to know what your thinking is here!)
I'd default to CategoryID being a string, with uniqueness constraints defined on the categories table. And UnitPrice to be a composite type, containing a [u]int amount and a string denomination, e.g. `10000 USD_CENT` or something like that.
But what you or I or anyone thinks is the right type for these or any columns, is totally beside the point. The point is that the type for a column isn't really assume-able by an LLM, at least not automatically.
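To make the "composite type" suggestion above concrete, here's roughly what it looks like at the application layer (a sketch only -- in the database itself you'd reach for something like a composite type or an amount column plus a denomination column; the names here are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Integer amount in a minor unit, plus a denomination tag."""
    amount: int        # e.g. cents -- integers avoid float rounding errors
    denomination: str  # e.g. "USD_CENT"

    def __str__(self) -> str:
        return f"{self.amount} {self.denomination}"

    def __add__(self, other: "Money") -> "Money":
        # Refuse to silently mix denominations.
        if self.denomination != other.denomination:
            raise ValueError("cannot add different denominations")
        return Money(self.amount + other.amount, self.denomination)
```

The point of the wrapper is exactly the kind of constraint a guessed DECIMAL column can't express: amounts in different denominations never combine by accident.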
I have tried using it as a therapist for issues that are not sensitive, and I think it's great. I won't use it for sensitive issues due to privacy concerns. I have also tried using it as a personal trainer and coach, and it works great - so far.
Have you tried looking for information from the developer about CANVAS? With any luck the developer has support documentation online that describes CANVAS and maybe you'll be able to narrow down your FOIA request.
I think the point of the lawsuit is less about CANVAS schema itself and more about the ability of the government to hide this kind of information from FOIA requests.