May 29, 2020
We are able to supervise students from FB18 or FB20.
The advertised thesis aims to explore how an open-domain natural language generation (NLG) model such as GPT-2 can be conditioned to generate justification questions for learning texts in a variety of domains. Given, e.g., a paragraph of a textbook about computer networks, we aim to generate plausible, but not necessarily correct, statements about the given text:
Justify whether the following statement is true or false:
VPNs have the disadvantage of being difficult to set up and maintain and they can be compromised by bad actors. (generated statement)
Research in psychology suggests that such statements, if crafted carefully, encourage readers to reflect on the text, which improves their comprehension. Moreover, recent research in NLP has shown that open-domain models can be conditioned by providing an initial text that they then continue coherently. The thesis therefore aims to explore to what extent such conditioning texts can be created automatically so that they yield meaningful justification questions.
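To make the conditioning idea concrete, the following is a minimal sketch of how a pretrained GPT-2 model could be prompted with a textbook paragraph plus a cue and asked to continue with a candidate statement. It assumes the Hugging Face transformers library; the model choice, prompt wording, and sampling parameters are illustrative assumptions, not part of the advertised thesis.

```python
# Minimal sketch: condition GPT-2 on a paragraph + cue and sample a continuation.
# Assumes the Hugging Face `transformers` library; all specifics are illustrative.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical learning text (stand-in for a textbook paragraph).
paragraph = (
    "A virtual private network (VPN) extends a private network across a "
    "public network, allowing users to send and receive data securely."
)
# Conditioning text: the paragraph followed by a cue the model should continue.
prompt = paragraph + "\nJustify whether the following statement is true or false:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,       # sample to obtain varied candidate statements
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the continuation, i.e. the generated statement.
statement = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(statement)
```

Sampling (rather than greedy decoding) is used here because the goal is a set of plausible candidate statements to choose from; how to construct the conditioning text and filter the candidates is precisely the open question the thesis would investigate.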
Keywords: automatic question generation; NLG; information extraction