Thank you! We are not far off (10 years?) from being able to ship a complete set with every computing device, though we would need everything converted and formatted, etc., and many will (unfortunately) think we should just use the cloud instead.
I will add this study, published in 2010, which estimated 129 million 'editions' published across all languages. Any text can have multiple editions (I don't recall exactly how 'edition' was defined, but IIRC the study counted published objects rather than texts), and the count "includes serials and sets but excludes kits, mixed media, and periodicals such as newspapers". It also contains a useful description of methods for bulk processing bibliographic data. (Though maybe this is the Google study you talked about, because IIRC their data is involved.)
J.-B. Michel et al., "Quantitative Analysis of Culture Using Millions of Digitized Books", Science (16 Dec 2010). https://www.science.org/doi/10.1126/science.1199644
By the way, I greatly appreciate your informed comments. They are some of the most substantive and interesting reading on HN.