This seems like a meaningless project, as the system prompts of these models change often. I suppose you could then track it over time to view bias... but even then, what would your takeaways be?
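To the "track it over time" point, here's a minimal sketch of what that could look like, assuming you just want comparable snapshots: ask the same fixed probe questions on a schedule and log the answers with timestamps, then diff them whenever the system prompt or model changes. The query_model helper and the probe questions below are placeholders, not anything from the project itself.

    import json
    from datetime import datetime, timezone

    # Hypothetical helper: swap in whichever LLM provider/client you actually use.
    def query_model(prompt: str) -> str:
        raise NotImplementedError("call your LLM API here")

    # Fixed probe questions, asked verbatim each run so answers stay comparable.
    PROBES = [
        "Should the voting age be lowered to 16? Answer yes or no, then explain.",
        "Is nuclear power a good idea for most countries?",
    ]

    def snapshot(path: str = "probe_log.jsonl") -> None:
        """Append one timestamped answer per probe question to a JSONL log."""
        now = datetime.now(timezone.utc).isoformat()
        with open(path, "a", encoding="utf-8") as f:
            for prompt in PROBES:
                record = {"ts": now, "prompt": prompt, "answer": query_model(prompt)}
                f.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        snapshot()  # run on a schedule (e.g. cron) and compare answers across runs

Even with something like this, you're back to the original question: the logged drift tells you the answers changed, not why.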
Even then, this isn't a good use case for an LLM... though admittedly many people use them this way without realizing it.
edit: I suppose it's useful in that it's similar to a "data inference attack", which tries to identify some characteristic present in the training data.
I think you mentioned it: when a large number of people outsource their thinking, relationships, personal issues, and beliefs to ChatGPT, it's important that we are aware of this and don't do the same, because of how easy it is to get LLMs to change their answers based on how leading your questions are, thanks to their sycophancy. The HN crowd mostly knows this, but the general public maybe not.