Thanks for sharing this research. A couple of concerns I'd have about many of the ideas for "democratic" engagement through LLMs:
1. Successful democracy probably requires an informed, engaged public. Civic engagement is important not only because of what it provides to the government (input and info from citizens) but also because it encourages citizens to take ownership of their government (and hopefully become more informed during the process). If I know that an LLM will voice my opinion (accurately) on my behalf whenever I skip an election (or town meeting), why should I take the time to research the issues and go to the polls/meeting?
2. Vulnerability to (perceived) manipulation. Even authoritarian governments typically hold elections and engage in some performative displays of apparent democracy. The more complicated a system of democratic input is, the harder it is for the public to understand and monitor it. How can there be transparency/outside monitoring in tallying votes or compiling public comments generated by LLMs? Is the public really going to believe in their legitimacy? I’d be extremely skeptical if my mayor said that an important decision for the city had been shaped by “democratic input” provided by LLMs. Ordinary democratic processes are also subject to manipulation, of course, but the attack surface seems much broader when you add LLM agents.
Thank you for reading, and for the critical engagement. And thanks for being the first commenter, my friend. :)
Ok. You make two good points about the challenges of integrating LLMs into democratic input. On each point:
1. I agree that successful democracy probably requires an informed, engaged public. However, I think there are plenty of opportunities for LLMs to "augment" the democratic input process by enabling new streams of input. Your "digital twin" could accurately represent your preferences in parallel public forums that your limited time wouldn't otherwise allow you to participate in. I think LLMs could also help improve the information ecosystem and the quality of research on policy issues. So for me, whether the use of LLMs enhances democratic input seems pretty context-dependent. I agree there's a significant risk that "automation" of democratic input leads to civic disengagement, but I think there are also lots of places where it helps, or could help.
2. Yes. It seems like both technical and governance issues are bundled up in this one. I focused on the voting case for this post because I think it illustrates how one specific, important aspect of democratic input (voting) has evolved alongside technological innovation while the high-level process has stayed the same: each individual submits a ballot that's aggregated in some way. I think the two empirical pieces I looked at also point to some real limitations in what these systems can reliably do here. The other thing I'd say is that AI systems are being improved and will be normalized in a variety of uses soon; there's a lot of progress in making them more secure and in developing open-source systems for specific uses.
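To make the point about the stable high-level process concrete, here's a minimal sketch of "each individual submits a ballot that's aggregated in some way" as a plurality tally. The names and the aggregation rule are just illustrative; real systems differ in the rule and in everything around it (identity, auditing, etc.):

```python
from collections import Counter

def tally_plurality(ballots):
    """Aggregate single-choice ballots into an outcome.

    Each ballot is just the name of the chosen option; plurality
    (most votes wins) is one of many possible aggregation rules.
    """
    counts = Counter(ballots)
    winner, _ = counts.most_common(1)[0]
    return winner, dict(counts)

# Five individual ballots, aggregated into one outcome.
winner, counts = tally_plurality(["A", "B", "A", "C", "A"])
# winner == "A", counts == {"A": 3, "B": 1, "C": 1}
```

Whether the ballots come from people or from their LLM proxies, the aggregation step itself is the same; the hard questions sit upstream of it.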
One question I had for you: what do you think of my argument that goes something like, "simulations of high enough fidelity may accurately be thought of as a type of democratic input"?

Thanks again for sharing your thoughts. :)
I'm very much "thinking out loud", because I hadn't pondered most of these specific topics prior to reading your post. So thanks again for the thought-provoking material!
In terms of your question, I am not totally convinced of the premise that we'll ever get to simulations of sufficient fidelity. But putting that concern aside, I guess I could believe it is a type of democratic input, but probably not a very potent one. By way of analogy, I think citizen surveys are a relatively weak (still useful) form of democratic input and don't displace the need for more active forms of citizen engagement.
One question I have: Is democratic input supposed to be a mere conveyance of information (preferences), or is it a more relational endeavor? If it is relational, I'm not sure my digital twin can accomplish it.
Another thing I've been pondering: Let's say my digital twin attends a city council meeting on my behalf and offers public comment. Three different versions of myself might offer comment, and we'd have to choose which one the LLM would represent: (1) I attend the meeting without preparation and am on my phone paying no attention to the meeting until it's my turn to speak; (2) I don't prepare, but I listen to the rest of the meeting and learn some things that affect what I say during my comment; (3) I read about the topic for a few hours before attending the meeting so that I will have a particularly informed comment to offer. Should digital twins represent what the public actually wants? Or what they would want if they became more enlightened?
This kind of choice is itself one vector of potential manipulation by elected officials: "The public comment simulation determined that red light cameras are supported by the majority" but only because the simulation assumed the public first read about the benefits of the cameras. This kind of political manipulation is also part of what I meant before when I talked about the "attack surface"--not just cybersecurity (though I meant cybersecurity too!).
This is great. Thanks for sharing! I have a couple more thoughts here:
- Your first question, about whether democratic input is mere information sharing or something more relational, is spot on. I think it's both, or better put, I think striving for the stronger (relational) version is important. At the same time, it seems to me there are lots of forums where information sharing would suffice, particularly if the most likely alternative is no input or no direct representation in that forum at all. In this way, I could imagine "digital twins" voting on all sorts of proposals across a wide set of forums (not necessarily physical ones), improving direct representation of a detailed approximation of my preferences. For example, maybe local issues are addressed in a more relational manner, while issues at more of a distance would benefit from simply having more democratic input. Either way, it still seems ideal for individuals themselves to stay actively engaged in the democratic endeavor. I guess I do come down on the side that the informational aspect matters more than the relational one at larger scales of governance.
- Your second point, about which "version" of you your "digital twin" should represent, is an important one as well. That said, it seems straightforward enough to resolve this, or to give people a choice via their "digital twin settings." Maybe I want my "digital twin" to be a more enlightened version of me, but maybe you're set in your ways and don't want to change your mind.
- Your third point, about manipulation, is another good one. I do think your specific case is a plausible problem, but I'm optimistic that one could make this process technically transparent by design.
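As a sketch of what "transparent by design" could mean here (purely illustrative, not any real system): publish every simulated comment to a tamper-evident hash chain, so outside observers can verify that nothing was altered or quietly inserted after the fact. Each record's hash covers the previous record's hash, so changing any earlier entry breaks every hash after it:

```python
import hashlib

def append_entry(log, entry: str) -> str:
    """Append an entry to a public, tamper-evident log."""
    prev = log[-1][1] if log else ""
    # Hash covers the previous record's hash, chaining the log.
    digest = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append((entry, digest))
    return digest

def verify(log) -> bool:
    """Recompute the whole chain and check every stored hash."""
    prev = ""
    for entry, digest in log:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "simulated comment #1: supports red light cameras")
append_entry(log, "simulated comment #2: opposed, cites privacy costs")
assert verify(log)

# Tampering with an earlier entry breaks verification:
log[0] = ("edited comment", log[0][1])
assert not verify(log)
```

This doesn't solve the deeper manipulation problem you raised (which assumptions the simulation is seeded with), but it would at least make the record of outputs auditable by anyone.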