Consiglio Nazionale delle Ricerche

Product type: Contribution in conference proceedings
Title: Evaluation of Natural Language Tools for Italian: EVALITA 2007
Year of publication: 2008
Format: -
Author(s): Magnini B.; Cappelli A.; Tamburini F.; Bosco C.; Mazzei A.; Lombardo V.; Bertagna F.; Calzolari N.; Toral A.; Bartalesi Lenzi V.; Sprugnoli R.; Speranza M.
Author affiliations: FBK-ricerca scientifica e tecnologica, Povo, Trento, Italy; CNR-ISTI, Pisa - CELCT, Povo, Trento, Italy; Dipartimento di Studi Linguistici e Orientali, Università di Bologna, Bologna, Italy; Dipartimento di Informatica, Università di Torino, Turin, Italy; Dipartimento di Informatica, Università di Torino, Turin, Italy; Dipartimento di Informatica, Università di Torino, Turin, Italy; CNR-ILC, Pisa, Italy; CNR-ILC, Pisa, Italy; CNR-ILC, Pisa, Italy; CELCT, Povo, Trento, Italy; CELCT, Povo, Trento, Italy; FBK-ricerca scientifica e tecnologica, Povo, Trento, Italy
CNR authors and affiliations:
  • FRANCESCA BERTAGNA
  • AMEDEO CAPPELLI
Language(s):
  • English
Abstract: EVALITA 2007, the first edition of the initiative devoted to the evaluation of Natural Language Processing tools for Italian, provided a shared framework where participants' systems had the possibility to be evaluated on five different tasks, namely Part of Speech Tagging (organised by the University of Bologna), Parsing (organised by the University of Torino), Word Sense Disambiguation (organised by CNR-ILC, Pisa), Temporal Expression Recognition and Normalization (organised by CELCT, Trento), and Named Entity Recognition (organised by FBK, Trento). We believe that the diffusion of shared tasks and shared evaluation practices is a crucial step towards the development of resources and tools for Natural Language Processing. Experiences of this kind, in fact, are a valuable contribution to the validation of existing models and data, allowing for consistent comparisons among approaches and among representation schemes. The good response obtained by EVALITA, both in the number of participants and in the quality of results, showed that pursuing such goals is feasible not only for English, but also for other languages.
Abstract language: English
Other abstract: -
Other abstract language: -
Pages from: 2536
Pages to: 2543
Total pages: 8
Journal: -
Journal volume number: -
Series: -
Volume title: Proceedings of LREC 2008
Series volume number: -
Volume editor(s): Nicoletta Calzolari, Khalid Choukri, Bente Maegaard
ISBN: 2-9517408-4-0
DOI: -
Publisher:
  • European Language Resources Association (ELRA), Paris (France)
Peer reviewed: Yes: International
Publication status: Published version
Indexing (in controlled databases): -
Keywords: Natural language evaluation, Standards for LRs, Evaluation methodologies
Link (URL, URI): http://www.lrec-conf.org/proceedings/lrec2008/
Conference title: LREC 2008
Conference location: Marrakech, Morocco
Conference date(s): 28-30 May 2008
Relevance: International
Relation: Contribution
Parallel title: -
Notes/Other information: -
CNR structures:
  • ILC — Istituto di linguistica computazionale "Antonio Zampolli"
  • ISTI — Istituto di scienza e tecnologie dell'informazione "Alessandro Faedo"
CNR modules/activities/subprojects:
  • ICT.P08.009.003: Knowledge Discovery and Data Mining
European projects: -
Attachments:
Evaluation of Natural Language Tools for Italian: EVALITA 2007
Description: PuMa code: cnr.isti/2008-A2-125
Document type: application/pdf

Historical data
Historical data cannot be modified; it was inherited from other systems (e.g. Gestione Istituti, PUMA, ...) and has historical value only.
Disciplinary area: Information Technology & Communications Systems
CIVR evaluation area: Science and technologies for an information and communication society
Notes: In: LREC 2008 - Proceedings of LREC 2008 (Marrakech, 26th May - 1st June 2008). Proceedings, pp. 2536-2543. Nicoletta Calzolari, Khalid Choukri, Bente Maegaard (eds.). ELRA - European Language Resources Association, 2008.