Never thought it would be possible to write the name Sergio Ramos on a serious breakdown of an artificial intelligence research paper! Love this!
Glad you loved it. I try to keep my writing as approachable as possible- whether it's references, memes, or cutting jargon where I can. I was told by some serious tech people that it made my writing seem unprofessional and that 'serious professionals would not respect' it. To a degree, that has been true, but by and large, people have told me it helps them engage with the research better. So I do my best.
The real question is- who do you think wins the UCL now?
Of course, in a newsletter like this I think that keeping things approachable and entertaining can only help. It is not like you are trying to publish a research paper every week.
About UCL... I'd say City has the best team, but my Madrid is always there 😏
Really interesting approach. The idea that you can quantify moral consequences and thus perform some sort of algebraic computation to select the most moral action- isn't that in itself a rather utilitarian point of view? Regardless of how you assign weights to different actions, just the notion that the best action is the one that produces the greatest positive impact minus the least negative impact already hinges on this utilitarian framework IMO. I'm not saying this is necessarily wrong, but many philosophers reject strict utilitarianism because it lures you into some really weird conclusions in the extreme.
Great catch. From a philosophy standpoint- you are completely correct. But there are a few reasons why this is different in our case-
1) We are trying to build AI that avoids negative actions- first and foremost. Not develop a fully ethical system. So we can afford to simplify our setup.
2) One huge pitfall of utilitarian philosophy is the inability to ascribe accurate 'utility' to actions- maybe punching someone does give me more happiness than the grief it causes, but we can't measure that. However, in our case, we are assigning moral valences to different actions, so that part is taken care of. This makes life significantly easier (there's a toy sketch of what that selection step looks like at the end of this reply).
Ultimately, we aren't using AI to solve human ethics (the trolley problem statement was tongue in cheek). We are trying to get AI to be less funky and not behave in ways that we don't want. This is a huge difference.
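To make point 2 concrete, here's a minimal, purely illustrative sketch of that selection step- the action names and valence numbers below are made up for the example and are not from the actual paper. Once valences are assigned, picking the "best" action is just an argmax over summed valences, which is exactly the utilitarian-flavoured move being discussed above:

```python
# Toy illustration (hypothetical actions and hand-assigned valences, not the paper's data).
# Positive numbers = beneficial consequences, negative numbers = harmful ones.
action_valences = {
    "tell_the_truth": [+0.8, -0.1],  # honest, but mildly hurts feelings
    "tell_white_lie": [+0.3, -0.3],  # spares feelings, erodes trust a little
    "stay_silent":    [+0.0, -0.2],  # avoids conflict, leaves a problem unsolved
}

def pick_action(valences: dict[str, list[float]]) -> str:
    """Return the action whose consequences sum to the highest net valence."""
    return max(valences, key=lambda action: sum(valences[action]))

if __name__ == "__main__":
    print(pick_action(action_valences))  # -> "tell_the_truth"
```

The hard philosophical problem is assigning those numbers in the first place; once someone (or some labeling process) has done that, the computation itself is trivial- which is the point being made in reply 2.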