It seems like I should do some within-task or in-domain further pre-training for my model, because it significantly reduces error rates, by approximately 16% to 18%. https://t.co/9UaCZUZp1Q
i once saw a researcher post that people shouldn't pay to read articles from journals because the authors don't get paid for it anyway, and people can just email them to get a copy of the paper. is that true? because i need to read this paper real bad: http
RT @arxiv_cscl: CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension https://t.co/2QVOTbp7g6