Wei Dai

Wei Dai is a computer engineer and former cypherpunk, best known as the original author of the Crypto++ cryptography library and for proposing b-money, a precursor to Bitcoin.

Quotes

 * I do have some early role models. I recall wanting to be a real-life version of the fictional "Sandor Arbitration Intelligence at the Zoo" (from Vernor Vinge's novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net. And then there was Hal Finney who probably came closest to an actual real-life version of Sandor at the Zoo, and Tim May who besides inspiring me with his vision of cryptoanarchy was also a role model for doing early retirement from the tech industry and working on his own interests/causes.
 * In response to the question "Do you have any role models?", March 2014


 * Here's my experience. I applied to just MIT and my state university (University of Washington). I got on MIT's waiting list but was ultimately not accepted, so went to UW. I would certainly have gone to MIT had I been accepted, but my thinking now is that if I did that, I would not have had enough free time in college to write Crypto++ and think about anonymous protocols, Tegmark's multiverse, anthropic reasoning, etc., and these spare-time efforts have probably done more for my "career" than the MIT name or what I might have learned there.
 * On his university experience, in a discussion thread on LessWrong, March 2011


 * [By the way], since you are advising high school students and undergrads, I suggest that you mention to them that they can start being independent researchers before they graduate from college. For example I came up with my b-money idea (a precursor to Bitcoin) as an undergrad, and was also already thinking about some of the questions that would eventually lead to [updateless decision theory].
 * In a discussion thread on LessWrong, March 2014


 * I typically find myself wanting to verify every single fact or idea that I hadn't heard of before, and say either "hold on, I need to think about that for a few minutes" or "let me check that on Google/Wikipedia". In actual conversation I'd suppress this because I suspect the other person will quickly find it extremely annoying. I just think to myself "I'll try to remember what he's saying and check it out later", but of course I don't have such a good memory.
 * In a discussion thread on LessWrong, June 2010


 * Here's my own horror story with academic publishing. I was an intern at an industry research lab, and came up with a relatively simple improvement to a widely used cryptographic primitive. I spent a month or two writing it up (along with relevant security arguments) as well as I could using academic language and conventions, etc., with the help of a mentor who worked there and who used to be a professor. Submitted to a top crypto conference and weeks later got back a rejection with comments indicating that all of the reviewers completely failed to understand the main idea. The comments were so short that I had no way to tell how to improve the paper and just got the impression that the reviewers weren't interested in the idea and made little effort to try to understand it. My mentor acted totally unsurprised and just said something like, "let's talk about where to submit it next." That's the end of the story because I decided that if that's how academia works, I wanted to have nothing to do with it when there's, from my perspective, an obviously better way to do things, i.e., writing up the idea informally, posting it to a mailing list and getting immediate useful feedback/discussions from people who actually understand and are interested in the idea.
 * On his experiences with academia, in a discussion thread on LessWrong, August 2017


 * In other words, when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.
 * In a discussion thread on LessWrong, March 2014


 * I think status is in fact a significant motivation even for me, and even the more "pure" motivations like intellectual curiosity can in some sense be traced back to status. It seems unlikely that [updateless decision theory] would have been developed without the existence of forums like extropians, everything-list, and LW, for reasons of both motivation and feedback/collaboration.
 * In response to someone suggesting that status is not a motivator for Dai, September 2017


 * Part of it, which perhaps you and most other observers are not aware of, is that I have enough passive income, and enough dispassion for conventional status signaling, that my marginal utility of money is pretty low compared to my disutility for doing busywork. To put it in perspective, I quit my last regular job in 2002, and stopped doing consulting for that company as well (at $100/hour) a year later when they merged with Microsoft and told me I had to do a bunch of paperwork and be hired by Microsoft's "independent consulting company" in order to continue.
 * In a discussion thread on LessWrong, July 2014


 * I don't like playing politics, I don't like having bosses and being told what to do, I don't like competition, I have no desire to manage other people, so I've instinctively avoided or quickly left any places that were even remotely maze-like.
 * In a discussion thread on LessWrong, January 2020


 * One solution [to the problem that high status might cause stupidity] that might work (and I think has worked for me, although I didn't consciously choose it) is to periodically start over. Once you've achieved recognition in some area, and no longer have as much interest in it as you used to, go into a different community focused on a different topic, and start over from a low-status (or at least not very high status) position.
 * In a discussion thread on LessWrong, January 2010


 * Does anyone not have any problems with taking ideas seriously? I think I'm in this category because ideas like cryonics, the Singularity, [unfriendly artificial intelligence], and Tegmark's mathematical universe were all immediately obvious to me as ideas to take seriously, and I did so without much conscious effort or deliberation.
 * In a discussion thread on LessWrong, August 2010


 * I think a highly rational person would have high moral uncertainty at this point and not necessarily be described as "altruistic".
 * In a discussion thread on LessWrong, June 2012


 * Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it's hard to see how that could be good for me regardless of what my values are or ought to be. I make an exception to this for people who might be in a position to build [a friendly artificial intelligence], since if they're too confident about altruism then they're likely to be too confident about many other philosophical problems, but even then I don't stress it too much.
 * In response to the question "Do you think that most people should be very uncertain about their values, e.g. altruism?", March 2014


 * This [the feeling that writing reviews of posts is work] is partly why I haven't done any reviews, despite feeling a vague moral obligation to do so. Another reason is that I wasn't super engaged with [LessWrong] throughout most of 2018 and few of the nominated posts jumped out at me (as something that I have a strong opinion about) from a skim of the titles, and the ones that did jump out at me I think I already commented on back when they were first posted and don't feel motivated to review them now. Maybe that's because I don't like to pass judgment (I don't think I've written a review for anything before) and when I first commented it was in the spirit of "here are some tentative thoughts I'm bringing up for discussion".
 * In a discussion thread on LessWrong, January 2020

Quotes about Dai

 * Yes, you're a freak and nobody but you and a few other freaks can ever get any useful thinking done and didn't we sort of cover this territory already?
 * Eliezer Yudkowsky, in a discussion thread on LessWrong, May 2011


 * Most people do not spontaneously try to solve the [friendly artificial intelligence (FAI)] problem. If they're spontaneously doing something, they try to solve the AI problem. If we're talking about sort of 'who's made interesting progress on FAI problems without being a Singularity Institute Eliezer supervised person,' then I would have to say: Wei Dai.
 * Eliezer Yudkowsky, in a question-and-answer session in response to the question "Who was the most interesting would-be FAI solver you encountered?", January 2010


 * I think you're a much better writer & thinker than me
 * Gwern Branwen, in a discussion thread on LessWrong, May 2011


 * I had ambitious plans for a much longer post, but I don't feel like writing this one anymore, so I'm going to truncate it here and publish it. The main upshot was probably going to be something about how Wei Dai continues to be and have been the single best contemporary thinker.
 * George Koleszarik (Grognor), in "Cooperative Epistemology", May 2016


 * Wei Dai made a coronavirus trade now up 700%, remarking "At least for me this puts a final nail in the coffin of EMH." Wei was already on my shortlist of EMH challengers.  This genuinely isn't looking great for EMH.
 * Eliezer Yudkowsky, on Twitter, February 27, 2020