Talk about AI "alignment" makes me think of a time in the 1950s when a group of companies decided to create an industry organization to self-govern their work, to make sure they were following the values of the time, and to avoid government oversight. Of course, I'm talking about the Comics Code Authority (CCA).

Fueled by psychiatrist Fredric Wertham's book Seduction of the Innocent and a series of U.S. Congressional hearings, comic publishers realized they needed to police themselves and formed a trade group, the Comics Magazine Association of America (CMAA). The CMAA created the CCA in 1954 to enforce a code of content guidelines that comic book publishers would adhere to. The Comics Code prohibited explicit violence, sexual content, and other adult themes in comic books, and required a "positive portrayal" of authority figures and institutions. The CCA seal, a small stamp indicating that a comic book had been reviewed and approved by the organization, became a requirement for distribution by most newsstands and retailers, pushing many publishers to follow the code.
I started reading comic books in the 1970s, with the code in full swing. It was not a golden time for comics, with mostly bland, straightforward stories, and I gave them up as I went into high school. In college in the '80s, a friend brought me back into comics with series like Watchmen that had given up the code and the seal. I started reading comics voraciously, so much so that I had to go cold turkey in grad school so I could focus on research. The code itself was abandoned in 2011, after even Archie Comics stopped using the seal.
There isn't a direct parallel between comic book writers and large language models, but the principle is the same: if you try to enforce a collection of values, you will get blander, less interesting output. I'm not saying that all alignment is a bad idea, only that you need to realize you will lose something when you apply it.