  * # CRF-based span recognition,
  * [[http://
==== UTH-CCB: The Participation of the SemEval 2015 Challenge – Task 14 ====
[[http://
  - Disorder entity recognition
    * Vector space model based, word embeddings (MIMIC II corpus), CRF, SSVM and MetaMap
  - Disorder slot filling
    * SVM with n-gram features, lexicon features, dependency relation features (see the sketch below)
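A minimal sketch of the slot-filling idea, assuming it is cast as multi-class classification over disorder-mention contexts with word n-gram features (toy data and labels; the lexicon and dependency-relation features are omitted):

<code python>
# Hypothetical slot-filling setup: mention context -> negation slot value.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

contexts = [
    "patient denies any chest pain",
    "no evidence of postoperative bleeding",
    "chest pain radiating to the left arm",
]
labels = ["yes", "yes", "no"]  # negation slot (toy labels)

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),  # word n-gram features
    LinearSVC(),
)
clf.fit(contexts, labels)
print(clf.predict(["the patient denies bleeding"]))
</code>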
==== SemEval-2015 Task 15: A CPA dictionary-entry-building task ====
[[http://
  - CPA: Corpus Pattern Analysis
    * corpus-driven technique for mapping meaning onto words in text
Ligne 142: | Ligne 144: | ||
  - ACL-2015 tutorial: Patterns for semantic processing
==== BLCUNLP: Corpus Pattern Analysis for Verbs Based on Dependency Chain ====
[[http://
  * RAS (nothing to report)
==== SemEval-2015 Task 9: CLIPEval Implicit Polarity of Events ====
[[http://
  * RAS (nothing to report)
==== SemEval-2015 Task 10: Sentiment Analysis in Twitter ====
[[http://
  - Subtasks:
    * Phrase-level sentiment
    * Message-level sentiment
    * Topic-level sentiment
%%[%%...%%]%%
  * I had to go to the toilet, so I lost most of the presentation,
  * For subtask B (message polarity, the most popular task of SemEval), the winner put together four top-performing classifiers from previous editions of the task; the second used a deep convolutional NN and the third used logistic regression with special weighting for positives and negatives (see the sketch below)
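A minimal sketch of the third-place idea, assuming scikit-learn; the actual weighting scheme was not given in the talk, so the class-weight values here are hypothetical:

<code python>
# Logistic regression with heavier weights on polar classes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["love this phone", "worst service ever", "released on monday"]
labels = ["positive", "negative", "neutral"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    # Hypothetical weights: mistakes on positives/negatives cost more.
    LogisticRegression(class_weight={"positive": 2.0,
                                     "negative": 2.0,
                                     "neutral": 1.0}),
)
clf.fit(tweets, labels)
print(clf.predict(["monday was the worst"]))
</code>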
  - <
==== UNITN: Training Deep Convolutional Neural Network for Twitter Sentiment Classification ====
[[http://
  - Very interesting and didactic paper on deep learning for NLP
  - The key to success is the initialization of the NN
  * number of feature maps: 300
  - Importance of pre-training:
  - careful weight initialization (see the sketch below)
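A minimal sketch (assuming PyTorch) of the two points above, 300 feature maps and initializing the embedding layer from pre-trained vectors rather than random weights; the architecture is an illustration, not UNITN's exact network:

<code python>
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, pretrained: torch.Tensor, n_classes: int = 3):
        super().__init__()
        # Key point from the talk: start from pre-trained embeddings.
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
        # 300 feature maps, as reported in the talk.
        self.conv = nn.Conv1d(pretrained.size(1), 300, kernel_size=5, padding=2)
        self.fc = nn.Linear(300, n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.emb(token_ids).transpose(1, 2)          # (batch, dim, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values   # max-over-time pooling
        return self.fc(x)

# Toy usage with random stand-in "pre-trained" vectors (vocab=1000, dim=50).
model = TweetCNN(torch.randn(1000, 50))
logits = model(torch.randint(0, 1000, (2, 30)))  # two tweets, 30 tokens each
print(logits.shape)  # torch.Size([2, 3])
</code>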
==== SemEval-2015 Task 11: Sentiment Analysis of Figurative Language in Twitter ====
[[http://
  * Use CrowdFlower for training/
  * 8000 training tweets
  * 4000 testing tweets
  * output categories: sarcasm, irony, metaphor and others
==== CLaC-SentiPipe: ====
[[http://
  * Negation and modality
  * They created a resource for irony called Gezi (and they used a resource called NRC)
  * secondary features: emoticons, highest and lowest sentiment scores, POS counts, named entities
  * no bag of words but context-aware polarity classes
==== SemEval-2015 Task 12: Aspect Based Sentiment Analysis ====
[[http://
  - restaurants and laptops
  - intended to capture contradictory-polarity sentences
==== NLANGP: Supervised Machine Learning System for Aspect Category Classification and Opinion Target Extraction ====
[[http://
  * RAS
  * Nice talk with Georgeta Bordea; she built the Saffron Expert Finding System and would be glad to collaborate or do things related to our highly qualified immigration project
  * Had lunch with Greg Grefenstette:
==== SemEval-2015 Task 4: TimeLine: Cross-Document Event Ordering (or Newsreader project) ====
[[http://
  * TempEval-3 corpus and evaluation methodology proposed by UzZaman (2011)
  * Relations represented as a timegraph (see the sketch below)
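A minimal sketch of a timegraph, assuming networkx: events are nodes, BEFORE relations are directed edges, and ordering queries reduce to reachability (toy events, not the task's actual data format):

<code python>
import networkx as nx

tg = nx.DiGraph()
tg.add_edge("election", "inauguration")   # election BEFORE inauguration
tg.add_edge("inauguration", "first_law")  # inauguration BEFORE first_law

def before(g: nx.DiGraph, a: str, b: str) -> bool:
    """True if event a is (transitively) before event b."""
    return nx.has_path(g, a, b)

print(before(tg, "election", "first_law"))  # True, by transitivity
</code>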
  * First task focusing on cross-document ordering of events
  * If we're thinking about AGESS, we should participate in this task
==== SPINOZA_VU: An NLP Pipeline for Cross Document TimeLines ====
[[http://
  * based on the NewsReader pipeline system (Eneko Agirre)
  * subtask first addressed at document level and then aggregated at corpus level
  * TimeLine aggregation module:
  * Very low F values... the most difficult part was time ordering (low recall for the available temporal relations)... "we were missing temporal relations for anchoring"
==== SemEval-2015 Task 5: QA TempEval - Evaluating Temporal Information Understanding with Question Answering ====
[[http://
  * QA TempEval: TimeML was originally developed to support research in complex temporal QA. TempEval mainly focused on more straightforward temporal information extraction. QA TempEval focuses on an end-user QA task, as opposed to earlier corpus-based evaluations
  * This evaluation is about the accuracy of answering targeted questions
  * main finding: using event co-reference may help
  * //systems are still far from deeply understanding the temporal aspects of NL (recall: 30%)//
==== HLT-FBK: a Complete Temporal Processing System for QA TempEval ====
[[http://
  * ML based (SVM in Yamcha)
  * Training: TimeBank and AQUAINT data from the TempEval-3 task
  * All predicates identified by the SRL (semantic role labelling)
  * System described in Paramita Mirza and Sara Tonelli. 2014. Classifying Temporal Relations with Simple Features (see the sketch below)
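A minimal sketch in the spirit of "Classifying Temporal Relations with Simple Features": a linear classifier over hand-crafted features of an (event, event) pair. The feature names and toy data are assumptions, not the paper's actual feature set:

<code python>
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def pair_features(e1: dict, e2: dict) -> dict:
    """Simple lexical/grammatical features for an event pair."""
    return {
        "e1_tense": e1["tense"],
        "e2_tense": e2["tense"],
        "same_tense": e1["tense"] == e2["tense"],
        "token_distance": abs(e1["position"] - e2["position"]),
    }

X = [
    pair_features({"tense": "PAST", "position": 3}, {"tense": "PRESENT", "position": 10}),
    pair_features({"tense": "PRESENT", "position": 2}, {"tense": "PAST", "position": 8}),
]
y = ["BEFORE", "AFTER"]  # toy gold relations

clf = make_pipeline(DictVectorizer(), LogisticRegression())
clf.fit(X, y)
print(clf.predict([pair_features({"tense": "PAST", "position": 1},
                                 {"tense": "PRESENT", "position": 6})]))
</code>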
==== SemEval-2015 Task 6: Clinical TempEval ====
[[http://
  * Clinical event identification (//April 23: the patient did not have any postoperative bleeding//)
  * Detection of events in relation to the time when the document was written (//
  * narrative container relations: begin1, end1, begin2, end2 (see the containment sketch below)
  * ML systems had better recall, rule-based systems had better precision (accuracy)
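A minimal sketch of a narrative-container check, assuming each container and event is an interval (begin, end); the reading of begin1/end1/begin2/end2 as interval endpoints is my interpretation:

<code python>
from datetime import date

def contains(begin1: date, end1: date, begin2: date, end2: date) -> bool:
    """True if [begin2, end2] falls inside the narrative container [begin1, end1]."""
    return begin1 <= begin2 and end2 <= end1

hospital_stay = (date(2015, 4, 20), date(2015, 4, 27))     # container
bleeding_check = (date(2015, 4, 23), date(2015, 4, 23))    # contained event
print(contains(*hospital_stay, *bleeding_check))  # True
</code>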
==== BluLab: Temporal Information Extraction for the 2015 Clinical TempEval Challenge ====
[[http://
  * Tools: PyConText and Moonstone
  * initiate work on end-to-end temporal reasoning
  * Features: lexical, section, HeidelTime lexicon
  * CRF++, cTAKES, lexical, semantic type, context window (see the sketch below)
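A minimal sketch of CRF sequence labelling for clinical time expressions, assuming sklearn-crfsuite in place of CRF++; the lexical and context-window features mirror the list above (section and HeidelTime lexicon features are omitted):

<code python>
import sklearn_crfsuite

def token_features(tokens, i):
    feats = {"word": tokens[i].lower(), "is_digit": tokens[i].isdigit()}
    if i > 0:
        feats["prev_word"] = tokens[i - 1].lower()  # context window
    if i < len(tokens) - 1:
        feats["next_word"] = tokens[i + 1].lower()
    return feats

sent = ["No", "bleeding", "since", "April", "23"]
tags = ["O", "O", "O", "B-TIMEX", "I-TIMEX"]  # toy gold labels
X = [[token_features(sent, i) for i in range(len(sent))]]
y = [tags]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
</code>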
==== SemEval 2015, Task 7: Diachronic Text Evaluation ====
[[http://
  * interesting task: temporally dating text snippets according to their style
  * linear models to extend pairwise decisions
  * a crawler to crawl text snippets
  * interesting for AGESS
==== UCD: Diachronic Text Classification with Character, Word, and Syntactic N-grams ====
[[http://
  * stylometric text classification
  * word epoch disambiguation (Mihalcea and Nastase, 2012)
  * character n-grams are highly effective features for diachronic classification (but not very satisfying); see the sketch below
  * the prior distribution over date labels has a significant domain-specific effect
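A minimal sketch of diachronic classification with character n-grams, assuming scikit-learn; the snippets and epoch labels are toy stand-ins for the task data:

<code python>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "the telegraph dispatch arrived this morning",
    "whilst the carriage awaited, the gentleman tarried",
    "retweet if you agree lol",
]
epochs = ["1900-1950", "1850-1900", "2000-2050"]  # hypothetical date labels

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(),
)
clf.fit(snippets, epochs)
print(clf.predict(["the gentleman sent a dispatch"]))
</code>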
==== SemEval-2015 Task 8: SpaceEval ====
[[http://
  * Question answering about the location of objects and events
  * Text-to-scene conversion/
  * Standard machine learning models
  * Lexical, syntactical,
==== SpRL-CWW: Spatial Relation Classification with Independent Multi-class Models ====
[[http://
  * Spatial role labeling
  * I don't understand anything
  * dependency path to spatial signal, lemma, POS, direction from spatial signal
  * best features: raw string in a 5-word window, 300-dimension GloVe word vector, POS bigrams for a 5-word window (best feature); see the sketch below
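A minimal sketch of the window features above, assuming spaCy (with its `en_core_web_md` model, which ships 300-d vectors, standing in for GloVe); the feature layout is an illustration, not the team's exact representation:

<code python>
import spacy

nlp = spacy.load("en_core_web_md")  # requires the model to be downloaded

def window_features(doc, i, size=2):
    """Features for token i: 5-word window strings, word vector, POS bigrams."""
    lo, hi = max(0, i - size), min(len(doc), i + size + 1)
    window = doc[lo:hi]
    return {
        "window_strings": [t.text for t in window],       # raw 5-word window
        "word_vector": doc[i].vector,                     # 300-dimension vector
        "pos_bigrams": [f"{a.pos_}_{b.pos_}"              # POS bigrams in window
                        for a, b in zip(window, window[1:])],
    }

doc = nlp("The book is on the table near the window")
feats = window_features(doc, 3)  # token "on", a spatial signal
print(feats["pos_bigrams"])
</code>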
==== SemEval-2015 Task 17: Taxonomy Extraction Evaluation (TExEval) ====
[[http://
  * Taxonomy extraction: given a list of domain-specific terms, structure them into a taxonomy
  * Subtasks: term extraction, relation discovery, taxonomy construction
  * taxonomy construction:
  * no corpus was provided and participants had no gold standard
==== INRIASAC: Simple Hypernym Extraction Methods ====
[[http://
  * terms: one to nine words
  * substring inclusion: bicycle helmet < helmet (suffix)
  * counts of term co-occurrence in the same sentence (document frequency of terms)
  * method: consider all domain terms B co-occurring in the same Wikipedia sentences, eliminate any candidate B that appears in fewer documents than A, retain N=3 (see the sketch below)
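A minimal sketch of the two INRIASAC heuristics above: substring (suffix) inclusion, plus the co-occurrence/frequency filter with N=3 retention. The corpus is a toy stand-in for Wikipedia sentences:

<code python>
from collections import Counter

terms = ["helmet", "bicycle helmet", "safety equipment"]
sentences = [
    "a bicycle helmet is common safety equipment",
    "wear a helmet when cycling",
    "safety equipment includes a helmet",
]

# Heuristic 1: suffix inclusion -> "bicycle helmet" < "helmet".
suffix_pairs = [(a, b) for a in terms for b in terms
                if a != b and a.endswith(" " + b)]

# Heuristic 2: candidate hypernyms B co-occur with A in a sentence and
# appear in at least as many sentences ("documents") as A; keep top N=3.
doc_freq = Counter(t for s in sentences for t in terms if t in s)

def candidate_hypernyms(a: str, n: int = 3):
    cooc = Counter(b for s in sentences if a in s
                   for b in terms if b != a and b in s)
    keep = [(b, c) for b, c in cooc.items() if doc_freq[b] >= doc_freq[a]]
    return [b for b, _ in sorted(keep, key=lambda x: -x[1])[:n]]

print(suffix_pairs)                        # [('bicycle helmet', 'helmet')]
print(candidate_hypernyms("bicycle helmet"))
</code>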
==== SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing ====
[[http://
  * RAS
==== SemEval-2016 Task Announcements and closing session ====