
Gabriel Weinberg's Blog

Gabriel Weinberg

Available episodes

5 of 28
  • What GLP-1 drug price is cost neutral to Medicare?
    As GLP-1s are studied more, their benefit profile is expanding rapidly. Acknowledging that many questions remain, a recent journal article titled “The expanding benefits of GLP-1 medicines” puts it like this:

    GLP-1 medicines, initially developed for blood glucose and weight control, improve outcomes in people with cardiovascular, kidney, liver, arthritis, and sleep apnea disorders, actions mediated in part through anti-inflammatory and metabolic pathways, with some benefits partly independent of the degree of weight loss achieved.

    Many millions of Americans would benefit from taking these drugs, but limited insurance coverage and high out-of-pocket costs limit their use. However, if the price were low enough to match the resulting cost savings, wider coverage could be justified. What price would that need to be?

    How can a drug be cost neutral (pay for itself)?

    If a drug reduces future care expenditures by more than it costs, it pays for itself (is cost neutral). Modeling this out can get complicated, especially for drugs whose benefits accrue over many years, because you need to consider at least how those cost savings unfold over time and how many people stop taking the drug (the adherence rate).

    What about GLP-1s?

    The Congressional Budget Office (CBO) looked into this question in detail in 2024, using these approximate assumptions:

    * 9-year time horizon (2026-2034)
    * 35% adherence (continuation) in the first year, ramping up to 50% by year 9
    * 80% yearly continuation rate after the first year of continuous use
    * Available to Medicare patients classified as obese, or overweight with at least one weight-related comorbidity
    * $5,600/year cost (implying ~$625/month if you assume a 75% government reimbursement)
    * Savings from reduced care of $50/year in 2026, reaching $650/year in 2034

    CBO concludes in their report that, under these assumptions, expanding GLP-1 coverage would be very costly to the federal government.

    Doesn’t Medicare prescribe GLP-1s now?

    Yes, but not for obesity writ large, which roughly doubles the qualified population. From the CBO report:

    In 2026, in CBO’s estimation, 29 million beneficiaries would qualify for coverage under the illustrative policy. About half of that group, or 16 million people, would have access to those medications under current law for indications such as diabetes, cardiovascular coverage, and other indications approved by the FDA in the interim.

    Still, CBO expects only a small percentage of eligible patients to use the drugs, due to activation and adherence. In the final year of their model (2034), they predict “about 1.6 million (or 14 percent) of the newly eligible beneficiaries would use an AOM [anti-obesity medication].”

    What break-even price does the CBO report imply?

    CBO doesn’t calculate a break-even price. They just say they expect $50 in average savings in year 1, rising to $650 in year 9, implying a 9% offset rate overall. If we assume a progression of increasing yearly savings matching these endpoints, you get cumulative savings of about $4,000, or about $445 per year. If you assume the government picks up 75% of the drug bill on average, that implies a break-even drug price of about $50/month.

    What has changed since 2024 that would modify this CBO estimate?

    * Time horizon. The CBO’s 9-year horizon is too short. They acknowledge that “from 2035 to 2044…the savings from improved health would be larger than they would be from 2026 to 2034”. So, let’s add 10 years (for a total of 19) and stipulate that the last ten years average $800 in savings, rising from the year-9 savings of $650. That implies an increase in average savings per year of about 1.4x.
    * Emerging benefits. The CBO only accounted for weight-loss benefits, using comparisons to bariatric surgery and other weight-loss evidence, noting that “CBO is not aware of any direct evidence showing that treatment of obesity with GLP-1-based products reduces spending on other medical services.” However, the other emerging benefits reduce conditions that are very costly to Medicare, like kidney, heart, and sleep apnea complications (e.g., dialysis, heart surgery, CPAP). I think we can speculatively call this a 2x multiplier.

    So, what break-even price does that imply today?

    $50/month (CBO original estimate) × 1.4 (for the increased time horizon) × 2 (for the increased benefits) ≈ $140/month.

    That is, at $140/month, we would expect the Medicare costs to roughly equal the cost savings and net out to zero (be cost neutral). That’s still well below the recently negotiated prices starting in 2027 (for example, Ozempic at $274).

    Why are you thinking about this again?

    I’m seeing the expanding benefit profile and thinking we have to find a way to get these benefits to more people, as a way to generally increase our average standard of living (in this case by greatly increasing health-span and quality of life). The best way I can see to get the benefits to the most people is for them to be government subsidized or provided. But health care costs are obviously a major barrier to that method, so framing expanded coverage as cost neutral seems most politically viable.

    What if the price were $100/month?

    At $100/month, it would be a no-brainer (assuming the above math is correct) to make the drugs available to qualified Medicare patients (say, using at least the CBO obesity criteria), since they would then clearly be making the government money. Additionally, at that price, I think you could start expanding access well beyond Medicare in waves, monitoring outcomes and cost savings. For example, you could start with programs where the government similarly bears both the costs and the benefits, like Medicare, such as the military and other federal workers. Then you could expand to Medicaid and disability (with cross-state subsidies). Ultimately there could be justification to subsidize a subset of the public at large, for example people aged 55+ who will be on Medicare within the next ten years, such that the savings would be realized by the federal government and the whole program could still be cost neutral.

    OK great, but how do you get GLP-1s at $100/month?

    This may be a half-baked idea, but one approach is to offer the market a yearly contract for expanded Medicare coverage, and whoever shows up first at the target price gets it (to be renegotiated yearly). I don’t think this is that crazy, because the manufacturing cost is estimated to be a small fraction of the list price, and the UK previously negotiated pricing in this ballpark. The volumes would be huge, and as more companies enter the market, I imagine eventually one of them would take the offer.
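    Since the post’s multipliers are back-of-envelope, here is a minimal Python sketch reproducing the arithmetic. The $445/year average savings, the 75% reimbursement share, and the 1.4x/2x multipliers come from the post; the variable names and rounding are just illustration, not CBO’s actual model.

    ```python
    # Back-of-envelope break-even math from the post (not CBO's model).
    avg_yearly_savings = 445  # $/year per user: the post's reading of CBO, 2026-2034
    gov_share = 0.75          # assumed share of the drug bill Medicare pays

    # Break-even monthly price: government's drug spend equals care savings.
    breakeven = avg_yearly_savings / (gov_share * 12)
    print(f"CBO-implied break-even: ~${breakeven:.0f}/month")  # 49; post rounds to $50

    # The post's speculative adjustments.
    horizon = 1.4   # extending 9 -> 19 years, with years 10-19 averaging $800
    benefits = 2.0  # emerging non-weight-loss benefits (kidney, heart, apnea)
    adjusted = breakeven * horizon * benefits
    print(f"Adjusted break-even: ~${adjusted:.0f}/month")      # 138; post rounds to $140
    ```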
    --------  
    8:56
  • One approach to a heavily curated information diet
    Disclaimer: This approach works for me. It may not work for you, but maybe it will give you some ideas.

    I find the signal-to-noise ratio on social media and news sites/apps too low for me to have a consistently good experience on them. So, I developed an alternative, heavily curated approach to an information diet that I’m laying out here in the hope that people will give me suggestions over time to improve it. It involves four main inputs:

    * RSS, skewed towards “most upvoted” feeds. I use Reeder because it has a polished user interface, formats the feed in an aggregated, chronological timeline across devices, and has native reddit and filtering support, but there are many other RSS readers too. I subscribe to around 25 feeds and 25 subreddits through Reeder. To increase the signal-to-noise ratio, I try to find “most upvoted” feeds where possible. For example, for subreddits I usually use the top posts for the week, which you can get for any subreddit like this: https://www.reddit.com/r/economics/top.rss?t=week (just replace ‘economics’ with the subreddit of your choice; see the short sketch after this description). Doing so will get you on the order of five top posts per day; you can also change ‘week’ to ‘day’ to increase that number to about twenty, or to ‘month’ to decrease it to about one, which I do for some feeds. To find generally interesting subreddits, I looked through the top few hundred subreddits and then added some niche ones for specific interests I have. My reddit list (alphabetical) mixes some really large subreddits (technology, science, todayilearned) with more niche ones (singularity, truereddit), as well as communities (rootsofprogress, slatestarcodex) and hobbies (phillyunion, usmnt). Getting about twenty-five across a range of your interests makes a good base feed. Many publications still have RSS feeds if you search for the publication name plus “RSS”; if they don’t, it’s likely RSS.app or Feedspot has made one you can use instead, and there is usually support through one of these methods for sub-section feeds, for example a tech section. Some other examples of non-reddit “most upvoted” feeds that might be more widely appealing: Hacker News RSS (for example, I added the 300- and 600-point feeds, meaning you get notified when a story hits 300 or 600 points; you can pick any number), NYT RSS (they have most emailed/shared/viewed), Techmeme RSS (curated by the Techmeme team), and LessWrong RSS (they have a curated feed). Then I also just consume the main RSS feeds of some really high-signal publications like Ars Technica (full articles come through for subscribers), The Information, Quanta, etc. Even with all this curation, the signal-to-noise ratio for me isn’t that great. I mostly skim the timeline, but I do end up getting a bunch of interesting articles this way every day. I also use Reeder’s filtering feature to drop some really low-hit keywords.

    * Podcasts. I subscribe to about 20 podcasts via Overcast. I like the Overcast “Voice Boost (clear, consistent volume)” and “Smart Speed (shorter silences)” features, as well as the ability to set a custom playback speed for each podcast. The signal-to-noise ratio is better here than the RSS feeds, but I still don’t listen to every episode, and for the ones I do, I often skip around. I like having a queue to listen to in the car and at the gym. I find new podcast discovery pretty hard: I’ve looked through the Overcast top podcasts lists in all the different categories and tried lots of them, but not many stick for me.

    * Email newsletters. I subscribe to about the same number (20-25) of email newsletters, some daily but most weekly or less. Signal-to-noise is lower than podcasts but higher than the RSS feeds: I’d guess my hit rate is about 20% in terms of reading them through, vs. maybe 50% for listening through podcasts and 5% for reading through the full RSS amalgamation. About half of the email newsletters I subscribe to are through Substack and half are direct from websites/organizations.

    * People sending me links. I really appreciate when people send me curated links, which happens less than I’d like, but I can’t complain because the signal-to-noise here is the highest, with a hit ratio of maybe 80%. I try to encourage it by saying thank you and responding when I have thoughts.

    With those four inputs I feel decently covered, but sometimes I do wonder what I’m missing out on and occasionally relapse into going directly to a news or social media app and skimming the front page. This method of course depends on having a good list of feeds, podcasts, and newsletters. But in general, I’m personally happier with this approach, though of course your mileage may vary. If you’re doing something similar and have any ideas on process tweaks or specific recommendations for feeds, podcasts, or newsletters, I’d love to hear them.
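    As a concrete illustration of the reddit URL pattern described above, here is a minimal Python sketch (my own, not from the post) that builds the “top posts this week” feed URLs and prints the latest entries. It assumes the third-party feedparser package (pip install feedparser); the subreddit list is just an example, and reddit may rate-limit anonymous feed requests.

    ```python
    import feedparser  # third-party: pip install feedparser

    # Illustrative subreddit list; swap in your own interests.
    SUBREDDITS = ["economics", "technology", "truereddit"]

    for sub in SUBREDDITS:
        # URL pattern from the post; t=week yields roughly five top posts
        # per day, t=day about twenty, t=month about one.
        url = f"https://www.reddit.com/r/{sub}/top.rss?t=week"
        feed = feedparser.parse(url)
        print(f"r/{sub}:")
        for entry in feed.entries[:5]:
            print("  -", entry.title)
    ```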
    --------  
    6:17
  • China has a major working-age population advantage through at least 2075
    In response to “A U.S.-China tech tie is a big win for China because of its population advantage,” I received feedback along the lines of: shouldn’t we be looking at China’s working-age population and not its overall population? I was trying to keep it simple in that post, but yes, we should, and when we do, we find, unfortunately, that China’s population advantage still persists. Here’s the data:

    Currently, China’s working-age population is over 4 times the U.S.’s. According to Our World in Data, China’s working-age population is 983 million to the U.S.’s 223 million, or 4.4x.

    In 2050, despite being in rapid decline, China’s working-age population is still projected to be over 3 times the U.S.’s. The projections put China’s 2050 working-age population at 745 million to the U.S.’s 232 million, or 3.2x.

    In 2075, noting projections are more speculative, China’s working-age population is still projected to be about double the U.S.’s. The projections put China’s 2075 working-age population at 468 million to the U.S.’s 235 million, or 2.0x.

    Noah Smith recently delved into this rather deeply in his post “China’s demographics will be fine through mid-century,” noting:

    China’s economic might is not going to go “poof” and disappear from population aging; in fact, as I’ll explain, it probably won’t suffer significant problems from aging until the second half of this century.

    And even in the second half, you can’t count on their demographic decline, both because even by 2075 their working-age population is still projected to be double the U.S.’s under current conditions, and because those conditions are unlikely to hold. As Noah also notes:

    Meanwhile, there’s an even greater danger that China’s leaders will panic over the country’s demographics and do something very rash…All in all, the narrative that demographics will tip the balance of economic and geopolitical power away from China in the next few decades seems overblown and unrealistic.

    OK, why does this demographic stuff matter again? Check out my earlier article for details, but here’s a summary:

    [A] U.S.-China tech tie is a big win for China because of its population advantage. China doesn’t need to surpass us technologically; it just needs to implement what already exists across its massive workforce. Matching us is enough for its economy to dwarf ours. If per-person output were equal today, China’s economy would be over 4× America’s because China’s population is over 4× the U.S.’s. That exact 4× outcome is unlikely given China’s declining population and the time it takes to diffuse technology, but 2 to 3× is not out of the question. China doesn’t even need to match our per-person output: their population will be over 3× ours for decades, so reaching ⅔ would still give them an economy twice our size, since 3 × ⅔ = 2.

    …With an economy a multiple of the U.S.’s, it’s much easier to outspend us on defense and R&D, since budgets are typically set as a share of GDP.

    …What if China then starts vastly outspending us on science and technology and becomes many years ahead of us in future critical technologies, such as artificial superintelligence, energy, quantum computing, humanoid robots, and space technology? That’s what the U.S. was to China just a few decades ago, and China runs five-year plans that prioritize science and technology.

    …Our current per-person output advantage is not sustainable unless we regain technological dominance. …[W]e should materially increase effective research funding and focus on our own technology diffusion plans to upgrade our jobs and raise our living standards.
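    The quoted multiples follow directly from the Our World in Data figures cited above. Here is a quick sketch checking them (the year labels and variable names are mine; the “current” figures are labeled 2025 for convenience):

    ```python
    # Working-age population, millions (Our World in Data figures from the post).
    china = {2025: 983, 2050: 745, 2075: 468}
    us    = {2025: 223, 2050: 232, 2075: 235}

    for year in china:
        print(f"{year}: {china[year] / us[year]:.1f}x")  # 4.4x, 3.2x, 2.0x

    # The GDP thought experiment: 3x the workers at 2/3 the per-person
    # output still yields an economy twice the size.
    print(3 * (2 / 3))  # 2.0
    ```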
    --------  
    4:27
  • Total Factor Productivity needs a rebrand (and if you don't know what that is, you probably should).
    If you don’t know about Total Factor Productivity (TFP), you probably should. It’s an economic concept that is arguably the most important driver of long-term economic prosperity. An International Monetary Fund (IMF) primer on TFP explains it like this (emphasis added):

    It’s a measure of an economy’s ability to generate income from inputs—to do more with less…If an economy increases its total income without using more inputs…it is said to enjoy higher TFP [Total Factor Productivity]. TFP is an important macroeconomic statistic [because] improvements in living standards must come from growth in TFP over the long run. This is because living standards are measured as income per person—so an economy cannot raise them simply by adding more and more people to its workforce. Meanwhile, economists have amassed lots of evidence that investments in capital have diminishing returns. This leaves TFP advancement as the only possible source of sustained growth in income per person, as Robert Solow, the late Nobel laureate, first showed in a 1957 paper.

    So, it’s important. Critically important to long-term progress. To learn more about TFP, check out the full IMF primer referenced above and then this post I wrote about TFP, titled “The key to increasing standard of living is increasing labor productivity,” which also has more links embedded in it. It explains how the only sustainable way to increase TFP is “to invent new technology that enables workers to do more per hour.” And this is why I’m always going on and on about increasing research funding.

    Let’s assume for a second that most people want more prosperity and that long-term prosperity does indeed primarily flow through Total Factor Productivity. Then why aren’t we talking about TFP a lot more? Why isn’t Total Factor Productivity front and center in our political agendas?

    I think there are a host of reasons for that, including those I outlined in the paradox of progress. But another, even simpler, reason has to be that Total Factor Productivity is a terrible, inscrutable name, at least from the perspective of selling the concept to the mainstream public.

    None of its three words is great. It starts with “total,” which isn’t as off-putting as the other words but doesn’t add much, especially as the first word, let alone the fact that economists quibble that it isn’t an actual total. “Factor” seems like a math word and doesn’t add much either. And then you have “productivity,” which is confusing to most people because it has an unrelated colloquial meaning, and from a political perspective it also codes as job-cutting, which is inherently unappealing.

    Now, lots of economics jargon has similar problems, case in point “Gross Domestic Product” (GDP). Given GDP hasn’t been rebranded, I doubt TFP will be either. That said, I think anyone trying to communicate this concept to the public shouldn’t take the TFP name or acronym as a given, but should try to use something more appealing and inherently understandable.

    I’m looking to switch to something else, but I’m not sure to exactly what. My thinking so far has led me to work in the words “prosperity” or “innovation” directly, like:

    * Prosperity Driver
    * Prosperity Component
    * Innovation Multiplier

    Do you have any other suggestions?
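    To make the “income you can’t attribute to inputs” idea concrete, here is a standard growth-accounting sketch (the usual Solow-residual formulation, not something from the post; the Cobb-Douglas form and the 0.3 capital share are conventional textbook assumptions):

    ```python
    ALPHA = 0.3  # capital share of income, a conventional assumption

    def tfp(Y, K, L, alpha=ALPHA):
        """Solow residual A from Y = A * K**alpha * L**(1 - alpha)."""
        return Y / (K**alpha * L**(1 - alpha))

    # Same capital (K) and labor (L) inputs, more income (Y) -> higher TFP:
    # "doing more with less."
    print(tfp(Y=100, K=50, L=200))  # baseline
    print(tfp(Y=110, K=50, L=200))  # 10% more income from identical inputs
    ```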
    --------  
    4:10
  • Is consumer AI heading for a duopoly?
    Fifteen years ago, Google started using their search monopoly to create a browser monopoly by pushing people to use Chrome through in-product promotions in Google search. It worked. Now they’re repeating that same playbook for consumer AI with Gemini, and it’s working again. In the last 30 days, Gemini has been downloaded about the same number of times as ChatGPT, and nothing else is even close.

    While ChatGPT had a massive head start, Google is rapidly turning consumer AI into a duopoly. Despite endless headlines mentioning Anthropic, Perplexity, and others, none of the alternatives seem to be meaningfully gaining market share right now relative to ChatGPT, except Gemini. The reason is simple: the others don’t have the distribution channels to match Google’s.

    The next phase of consumer AI competition will favor Google even more. As I recently noted, consumer internet workflows increasingly span search, browsing, and AI. Who has the most entrenched position in search and browsing to complement consumer AI? Google. For example, their monopoly browser (Chrome) can get AI features to most consumers the fastest.

    Google’s ability to leverage its market position to distribute its own AI products continues unabated, and U.S. v. Google made clear that distribution powers a scale advantage. That is, Google’s search assets are not easily replicable because of the vast user engagement data Google alone possesses. And an increasing number of sites don’t even allow web crawlers or access to their content, except for Google.

    We shouldn’t settle for a shift from Google’s search monopoly to an AI duopoly. Thus far, regulators have only addressed Google’s advantages at the margins. There remains time to address these dynamics and unlock innovation.

    One possible (non-regulatory) response is deeper partnerships and consolidation between the other AI companies, search engines, and browsers, in an effort to compete with more scale in this new market. This has already started around the edges; for example, we (at DuckDuckGo) have partnered with You.com to develop a better news search index and are looking to partner with others to advance the web index we’ve been working on, as well as to enhance our browser and AI features. But the market is ripe for larger deals.

    To see where those might come from, consider the top mobile search engines in the U.S. (consumer AI is used primarily on mobile) according to Cloudflare, who sees the most traffic: Google is #1, followed by us (DuckDuckGo) at #2, then Yahoo and Bing. Everyone else is sub-1%.

    Similarly, among the top mobile browsers in the U.S., Safari and Chrome dominate, followed by Samsung Internet, DuckDuckGo, and Firefox above 1%.

    And finally, among the top 20 consumer web destinations in the U.S., according to SEMRush, the top ten include Google, Reddit, Facebook, Amazon, Yahoo, Wikipedia, Instagram, DuckDuckGo, and ChatGPT.

    Partnerships and consolidation between these companies could produce some more effective competition. So far, consumer AI has actually driven more traditional search and browser usage, not less. We see that in our numbers, and SparkToro reports similar findings for others. AI is driving people to do more information seeking in general, and, as mentioned, those workflows increasingly span search, browsing, and AI. The best experiences seamlessly blend all three in the browser, so it is natural that companies with assets in some of the three areas would want to partner with companies with non-overlapping assets. Additionally, a company with a large consumer user base could help directly drive distribution of consumer AI, browsers, and search engines, especially if that company has unique content assets.

    A duopoly in consumer AI will not just be bad for innovation, but will further erode privacy. That’s why I believe DuckDuckGo will remain an important alternative regardless of what happens, but I’m still a little hopeful that innovative partnerships and consolidation could challenge the rapidly emerging consumer AI duopoly.
    --------  
    5:15


About Gabriel Weinberg's Blog

Tech, policy, and business insights from the DuckDuckGo founder and co-author of Traction and Super Thinking. gabrielweinberg.com
Podcast website
