Requirement Analysis                  Version 0.4


                                                      by the Stat Team

                                                        Mehrbod Sharifi
                                                             Jing Yang




               The Stat Project, guided by

          Professor Eric Nyberg and Anthony Tomasic




                      Feb. 25, 2009
Chapter 1

Introduction to Stat

In this chapter, we give a brief introduction to the Stat project for the audience reading this document.
We explain the background, the motivation, the scope, and the stakeholders of this project so that
readers can understand why we are undertaking it, what we are going to do, and who may be interested
in our project.


1.1    Overview
Stat is an open source machine learning framework in Java for text analysis, with a focus on
semi-supervised learning algorithms. Its main goal is to facilitate common textual data analysis tasks
for researchers and engineers, so that they can get their work done straightforwardly and efficiently.

    Applying machine learning approaches to extract information and uncover patterns from textual
data has become extremely popular in recent years. Accordingly, many software packages have been
developed to let people use machine learning for text analytics and automate the process.
Users, however, find many of these existing packages difficult to use, even when they just want
to carry out a simple experiment; they have to spend considerable time learning the software, only
to discover that they still need to write their own programs to preprocess data before their target
software will even run.

    We have observed this situation, and we believe that much of it can be simplified. A new software
framework should be developed to ease the process of doing text analytics; we believe researchers
and engineers using our framework for textual data analysis will find the process convenient,
comfortable, and perhaps even enjoyable.


1.2    Purpose
Existing software for applying machine learning to linguistic analysis has tremendously
helped researchers and engineers make new discoveries from textual data, which is arguably
one of the most common forms of data in the real world.

    As a result, many more researchers, engineers, and possibly students are increasingly interested
in using machine learning approaches in their text analytics. However, the bar for entering this
area is not low. These people, some of them experienced users, find that existing software
packages are generally neither easy to learn nor convenient to use.



For example, although Weka has a comprehensive suite of machine learning algorithms, it is
not designed for text analysis and lacks built-in capabilities for naturally representing and
processing linguistic concepts. MinorThird, on the other hand, though designed specifically as a
package for text analysis, turns out to be rather complicated and difficult to learn. It also does not
support semi-supervised or unsupervised learning, which are becoming increasingly important
machine learning approaches.

   Another problem with many existing packages is that they often adopt their own specific input
and output formats. Real-world textual data, however, generally comes in other formats that are not
readily understood by those packages. Researchers and engineers who want to make use of those
packages often find themselves spending much time seeking or writing ad hoc format conversion
code. This ad hoc code, which could have been made reusable, is written over and over again
by different users.

    Researchers and engineers, when presented with common text analysis tasks, usually want a
text-specific, lightweight, reusable, understandable, and easy-to-learn package that helps them get
their work done efficiently and straightforwardly. Stat is designed to meet these requirements.
Motivated by the needs of users who want to simplify their work and experiments with textual data
learning, we initiated the Stat project, dedicated to providing suitable toolkits that facilitate
their analytics tasks on textual data.

      In a nutshell, Stat is an open source framework aimed at providing researchers and
      engineers with an integrated set of simplified, reusable, and convenient toolkits for textual
      data analysis. Based on this framework, researchers can carry out their machine learning
      experiments on textual data conveniently and comfortably, and engineers can build their
      own small applications for text analytics straightforwardly and efficiently.

    In terms of comprehensiveness of features, this framework may not be the most suitable one
compared to other existing packages. However, we are dedicated to making all the code we write
well designed, efficient, and reliable.


1.3     Scope
This project involves developing a simplified and reusable framework (a collection of foundation
classes) in Java that provides basic and common capabilities for people to easily perform machine
learning analysis on various kinds of textual data.

   (The specific aspects of the work to be addressed will be added here.)




1.4    Stakeholders
Below is a list of the stakeholders and how this project will affect them:

   • Researchers, particularly in language technology but also in other fields, would be able
     to save time by focusing on their experiments instead of dealing with the various
     input/output formats that text processing routinely requires. They can also easily switch
     between the various tools available and even contribute to STAT so that others can save
     time by using their adaptors and algorithms.

   • Software engineers, who are not familiar with machine learning, can start using the
     package in their programs after a very short learning phase. STAT can help them quickly
     develop a clear understanding of machine learning concepts. They can easily build their
     applications using the functionality STAT provides and achieve a high level of performance.

   • Developers of learning packages can provide plug-ins for STAT to allow easy integration
     of their packages. They can also delegate some of their interoperability needs to this
     program (some of which may be more time-consuming to address within their own
     packages).

   • Beginners to text processing and mining, who want fundamental and easy-to-learn
     capabilities for discovering patterns in text. They will benefit from this project, which
     saves them time, facilitates their learning process, and sparks their interest in the area
     of language technology.




Chapter 2

Survey Analysis

This project faced many challenges from the beginning. There are many questions, some of a
subjective nature, that really need to be answered by our target audience. For this reason, we
designed a survey to obtain a better understanding and to provide a more suitable solution to this
problem. In this chapter, we explain the process of designing the survey, collecting the responses,
and some analysis of the collected data.


2.1    Designing the Survey
The primary goals of the survey were the following:

   • Understanding the potential users of the package: their programming habits, problem-solving
     strategies, experience with various areas and tools, etc.

   • Setting priorities for which criteria to focus on in our design and implementation

   The survey needed to be short, with very specific questions, to get better responses. The
maximum number of questions was set at 10. Several drafts of the questions were reviewed within
the STAT group and with the software engineering class students and instructors several times
until they were finalized. We also obtained and incorporated advice from other departments. The
final survey was built on SurveyMonkey.com.


2.2    Distribution
The target users of STAT fall into two main groups with different needs: researchers and industry
programmers. The survey contains questions to distinguish these two groups, but the final framework
should address the needs of both. After conducting a test run with the STAT group
and the class, we sent the survey out to the Language Technologies Institute student mailing list
(representing researchers) and also to students in the iLab (Prof. Ramayya Krishnan, Heinz School of
Business), representing industry programmers.


2.3    Analysis of Results
As of 2/25/09, we have received 23 responses; they were reviewed by STAT members both individually
and in aggregate. Below we summarize the findings of the survey, along with some charts:

   • While many different programming languages are used (Python, R, C++), over 90%


• Users do not seem to distinguish much between industry and research applications, which is
  perhaps a reason for this distinction to be kept transparent in the framework.

• Most users are not familiar with Operations Research, but everyone is somewhat familiar with
  Machine Learning (if not specifically text classification or data mining).

• As expected, the data types were mostly textual (plain text, XML, HTML, etc., as opposed to
  Excel, though it was mentioned), and the sources were files, databases, and the web.

• Over 50%

• Ease of API use, performance, and extensibility were the top three design choices, but in
  addition to those, in the free-text descriptions users mostly pointed out problems with input
  and output formats.

Charts to be added here...




Chapter 3

Analysis of Related Packages

In this chapter, we analyze a few main competitors of our project. We focus on two academic
toolkits, Weka and MinorThird. We comment on their strengths, explore their limitations, and
discuss why and how we can do better than these competitors.


3.1        Weka
Weka is a comprehensive collection of machine learning algorithms for solving data mining problems,
written in Java and open sourced under the GPL.

3.1.1       Strengths of Weka
Weka is very popular machine learning software, owing to its main strengths:

        • Provides comprehensive machine learning algorithms. Weka supports most current
          machine learning approaches for classification, clustering, regression, and association rules.
        • Covers most aspects of a full data mining process. In addition to learning,
          Weka supports common data preprocessing methods, feature selection, and visualization.
        • Freely available. Weka is open source, released under the GNU General Public License.
        • Cross-platform. Weka is cross-platform, being fully implemented in Java.

Because of its comprehensive support for machine learning algorithms, Weka is often used for
analytics on many forms of data, including textual data.

3.1.2       Limitations of using Weka for text analysis
However, Weka is not designed specifically for textual data analysis. The most critical drawback
of using Weka to process text is that it does not provide “built-in” constructs for naturally
representing linguistic concepts. (There are classes in Weka that support basic natural language
processing, but they are viewed as auxiliary utilities: they make basic textual data processing
with Weka possible, though neither convenient nor straightforward.) Users interested in using Weka
for text analysis often find themselves needing to write ad hoc programs to preprocess text and
convert it to Weka's representation.

   • Not good at understanding various text formats. Weka is good at understanding its
     standard .arff format, which is, however, not a convenient representation for text. Users
     have to worry about how to convert textual data from its various original formats, such as
     raw plain text, XML, HTML, CSV, Excel, PDF, MS Word, Open Office documents, etc., into
     something Weka understands. As a result, they need to spend time seeking or writing external
     tools to complete this task before performing their actual analysis.
   • Unnecessary data type conversion. Weka is superior at processing nominal (a.k.a.
     categorical) and numerical attributes, but not string attributes. In Weka, non-numerical
     attributes are by default imported as nominal attributes, which is usually not a desirable
     type for text (imagine treating different chunks of text as different values of a categorical
     attribute). One has to explicitly use filters to perform the conversion, which could have
     been done automatically if Weka knew that text was being imported.
   • Lack of specialized support for linguistic preprocessing. Linguistic preprocessing
     is a very important aspect of textual data analysis but is not a concern of Weka, which is
     not dedicated to handling this issue for users. Weka has a StringToWordVector class that
     performs all-in-one basic linguistic preprocessing, including tokenization, stemming,
     stopword removal, tf-idf transformation, etc. However, it is not very flexible, and it lacks
     other techniques (such as part-of-speech tagging and n-gram processing) for users who want
     fine-grained, advanced linguistic control.
   • Unnatural representation of textual data learning concepts. Weka is designed for
     general-purpose machine learning tasks and therefore has to accommodate many variations. As
     a result, domain concepts in Weka are abstract and high-level, the package hierarchy is
     deep, and the number of classes explodes. For example, we have to use Instance rather than
     Document and Instances rather than Corpus. Concepts such as Attribute are obscure in meaning
     for text processing. Adding many Attribute objects to a cryptic FastVector, which is then
     passed to an Instances object to construct a dataset, feels very awkward to users processing
     text (see the sketch after this list). Categorizing filters first by attribute/instance and
     then by supervised/unsupervised leaves non-expert users confused and unable to find the
     right filters. Many users may feel uncomfortable using Weka programmatically to carry out
     their text-related experiments.
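
    To make the last two points concrete, the sketch below shows roughly what building a tiny
two-class text dataset and vectorizing it looks like with the classic Weka 3.x Java API. The class
names (FastVector, Attribute, Instances, StringToWordVector) are real Weka classes, but the exact
calls are reconstructed from memory and may differ slightly between Weka versions:

    import weka.core.Attribute;
    import weka.core.FastVector;
    import weka.core.Instance;
    import weka.core.Instances;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.StringToWordVector;

    public class WekaTextSketch {
        public static void main(String[] args) throws Exception {
            // Equivalent ARFF that Weka reads natively:
            //   @relation corpus
            //   @attribute text string
            //   @attribute class {spam,ham}
            //   @data
            //   'cheap meds buy now',spam
            FastVector attributes = new FastVector();
            attributes.addElement(new Attribute("text", (FastVector) null)); // null marks a string attribute
            FastVector classValues = new FastVector();
            classValues.addElement("spam");
            classValues.addElement("ham");
            attributes.addElement(new Attribute("class", classValues));

            // "Instances" plays the role of a corpus, "Instance" of a document
            Instances corpus = new Instances("corpus", attributes, 0);
            corpus.setClassIndex(1);

            Instance doc = new Instance(2);
            doc.setDataset(corpus);
            doc.setValue(0, "cheap meds buy now");
            doc.setValue(1, "spam");
            corpus.add(doc);

            // An explicit filter pass is needed to turn strings into word features
            StringToWordVector filter = new StringToWordVector();
            filter.setInputFormat(corpus);
            Instances vectors = Filter.useFilter(corpus, filter);
            System.out.println(vectors);
        }
    }

Even this minimal example requires knowing about FastVector, class indices, and an explicit
filtering pass before any learning can begin.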

    In summary, users who want an enjoyable experience performing text analysis need built-in
capabilities that naturally support representing and processing text. They need specialized,
convenient tools that help them finish the most common text analysis tasks straightforwardly
and efficiently. Despite its comprehensive tools, Weka cannot provide this, owing to its
general-purpose nature.

3.1.3   Detailed design defects of Weka from the perspective of text analysis




Figure 3.1: Partial domain model for Weka for basic text analysis




Chapter 4

Requirements Specifications

Here we first explain in detail the major features of our framework.

   • Simplified. APIs are clear, consistent, and straightforward. Users with reasonable Java
     programming knowledge can learn our package without much effort, understand its logical
     flow quickly, get started within a small amount of time, and finish the most common tasks
     with a few lines of code. Since our framework is not designed for general purposes or for
     comprehensive feature coverage, there is room for us to simplify the APIs and optimize for
     the most typical and frequent operations.

   • Reusable. Built-in modular support is provided for the core routines across the various
     phases of text analysis, including text format transformation, linguistic processing,
     machine learning, and experimental evaluation. Additional functionality can easily be
     extended on top of the core framework, and user-defined specifications are pluggable (a
     sketch of this pluggability follows this list). Existing code can be reused across
     environments and can interoperate with related external packages, such as Weka, MinorThird,
     and OpenNLP.
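
    As a first illustration of the pluggability we have in mind, the sketch below shows how a
user-defined preprocessing component could implement the same interface as the built-in ones. Every
name here (Document, Preprocessor, LowercasePreprocessor) is a hypothetical illustration of the
intended design, not a committed API:

    // Hypothetical sketch; all names are illustrative, not a final API.
    final class Document {
        private final String text;
        Document(String text) { this.text = text; }
        String text() { return text; }
    }

    // Built-in and user-defined preprocessors implement the same interface,
    // so they plug into the processing pipeline interchangeably.
    interface Preprocessor {
        Document process(Document doc);
    }

    class LowercasePreprocessor implements Preprocessor {
        @Override
        public Document process(Document doc) {
            return new Document(doc.text().toLowerCase());
        }
    }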


4.1    Functional Requirements
In this section, we define the most common use cases of our framework and describe them at the
level of detail of a casual use case. The “functional requirements” of this project are that users
can complete these use cases more easily and comfortably with the libraries our framework provides
than without them.

Actors
Since our framework assumes that all users of interest program against our APIs, there is only one
human actor role, namely the programmer. This human actor is always the primary actor. There are
also possible secondary and system actors, namely the external packages our framework integrates,
depending on which specific use cases the primary actor is performing.




Fully-dressed Use Cases

               Use Case UC1: Document Classification Experiment

Scope: Text analysis application using STAT framework

Level: User goal

Primary Actor: Researcher

Stakeholders and Interests:

       • Researcher: Wants to test and evaluate a classification algorithm (supervised, semi-
         supervised, or unsupervised) by applying it to a (probably well-known) corpus; the task
         needs to be done efficiently, with easy and straightforward coding

Preconditions:

       • STAT framework is correctly installed and configured
       • The corpus is placed on a source readable by the STAT framework

Postconditions:

       • A model is trained and test documents in the corpus are classified. Evaluation results
         are displayed

Main Success Scenario:

       1. Researcher imports the corpus from its source into memory. Specifically, the system
          reads data from the source, parses the raw format, extracts information according to
          the schema, and constructs an in-memory object to store the corpus
       2. Researcher preprocesses the corpus. Specifically, for each document, the researcher
          tokenizes the text, removes stopwords, performs stemming on the tokens, applies
          filtering, and/or other potential preprocessing to the body text and metadata
       3. Researcher converts the corpus into the feature vectors needed for machine learning.
          The feature vectors are created by analyzing the documents in the corpus, deriving or
          filtering features, adding or removing documents, sampling documents, handling missing
          entries, normalizing features, selecting features, and/or other potential processing
       4. Researcher splits the processed corpus into training and testing sets
       5. Researcher chooses a machine learning algorithm, sets its parameters, and uses it to
          train a model from the training set
       6. Researcher classifies the documents in the test set based on the trained model
       7. Researcher evaluates the classification based on the classification results obtained on
          the test set and its true labels. Classification is evaluated mainly on classification
          accuracy and classification time, or, if it is unsupervised, on unsupervised metrics
          such as the Adjusted Rand Index
       8. Researcher displays the final evaluation result
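
Read end to end, the scenario suggests usage along the following lines. This is only a sketch of
the intended experience; every class and method name below (Corpus, StopwordRemover,
NaiveBayesLearner, and so on) is a hypothetical illustration rather than a committed API:

    // Hypothetical sketch of UC1 in the intended STAT API; all names illustrative.
    Corpus corpus = Corpus.importFrom("data/20news/", Format.PLAIN_TEXT);      // step 1: import
    corpus.preprocess(new Tokenizer(), new StopwordRemover(), new Stemmer());  // step 2: preprocess
    Dataset vectors = corpus.toFeatureVectors(new TfIdfFeatures());            // step 3: featurize
    Split split = vectors.split(0.8);                                          // step 4: train/test split
    Learner learner = new NaiveBayesLearner();                                 // step 5: choose and
    Model model = learner.train(split.trainingSet());                          //         train a model
    Classification result = model.classify(split.testSet());                   // step 6: classify
    Evaluation eval = result.evaluateAgainst(split.testSet().labels());        // step 7: evaluate
    System.out.println(eval.summary());                                        // step 8: display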




Use Case UC1: Document Classification Experiment (cont.)

Extensions:
    1a. The framework is unable to find the specified source.
       1. Throw a source-not-found exception
    1b. Researcher loads a previously saved corpus in native format from a file on disk directly
    into a memory object, so the researcher does not handle source, format, or schema explicitly.
       1a. File not found:
          1. Throw a file-not-found exception
       1b. Malformed native format:
          1. Throw a malformed-native-format exception
    4a. Researcher specifies a parameter k larger than the number of documents or smaller than 1
       1. Throw an invalid-argument exception
    1-3, 5a. After finishing each step, Researcher saves the in-memory objects of the corpus
    representation at its different levels of processing to disk in native format, so that they
    can be loaded back later.
    1-3, 5b. Researcher exports the in-memory objects of the corpus representation at its
    different levels of processing to disk in external formats (e.g., Weka arff, csv) that can be
    processed by external software.
    6a. Researcher saves the in-memory model object to disk, so that it can be loaded back later.
    6b. Researcher loads a previously saved model in native format from a file on disk directly
    into a memory object.
       1a. File not found:
          1. Throw a file-not-found exception
       1b. Malformed native format:
          1. Throw a malformed-native-format exception
    4-8b. To perform k-fold cross-validation, the corpus is split into k parts in step 4, and
    steps 5-8 are repeated k times, with each split serving in turn as the test split and the
    rest as training. Researcher combines the evaluations obtained on the different test sets in
    the previous steps into a final classification evaluation (see the sketch after this list).
    6c. Unsupported learning parameters (the learning algorithm cannot handle the combination
    of parameters the researcher specifies)
       1. Throw an unsupported-learning-parameters exception
    6d. Unsupported learning capability (the learning algorithm cannot handle the format and
    data in the training set, potentially caused by an unsupported feature type, class type,
    missing values, etc.)
       1. Identify the exception cause(s)
       2. Throw the corresponding exception(s)
    8a. Incompatibility between the test set and the classification (potentially caused by a
    difference in schema between the training set and the test set)
       1. Throw an incompatible-evaluation exception
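
As a rough illustration of extension 4-8b, k-fold cross-validation might be expressed as follows;
as before, every name is a hypothetical illustration of the intended API:

    // Hypothetical sketch of extension 4-8b (k-fold cross-validation).
    int k = 10;
    Fold[] folds = vectors.splitIntoFolds(k);                // step 4: split into k parts
    Evaluation combined = new Evaluation();
    for (int i = 0; i < k; i++) {
        Dataset test = folds[i].asTestSet();                 // fold i serves as the test split
        Dataset train = Fold.mergeExcept(folds, i);          // the remaining k-1 folds form training
        Model model = learner.train(train);                  // step 5: train
        Classification result = model.classify(test);        // step 6: classify
        combined.add(result.evaluateAgainst(test.labels())); // step 7: accumulate per-fold evaluation
    }
    System.out.println(combined.summary());                  // step 8: display combined evaluation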
Use Case UC1: Document Classification Experiment (cont.)

 8b. The researcher customizes the display instead of using the default display format.
        1. The researcher obtains specific fields of the evaluations via the provided interfaces
        2. The researcher constructs a customized format using the fields he/she extracts
        3. The researcher displays the customized format and/or writes it to a destination

Special Requirements:

        • Pluggable preprocessors in steps 2-3
        • Pluggable learning algorithms in step 6
        • The learning algorithm should be scalable to deal with large corpora
        • Researcher should be able to visualize results after various steps to trace the state of
          different objects (e.g., preprocessed corpus, models, classifications, evaluations)
        • Researcher should be able to customize the visualization output

Open Issues:

        • How to address the variation issues in reading different sources
        • How (in what form) to let researchers specify parameters for different learning algorithms
        • What specifically needs to be exportable, persistable, and visualizable?
        • How to implement corpus splitting efficiently (without creating extra objects)
        • How to deal with the performance implications of storing large corpora in memory
        • How to represent the dataset internally with efficient data structures



4.2   Non-functional Requirements
  • Open source. It should be made available for public collaboration, allowing users to use,
    change, improve, and redistribute the software.
  • Portability. It should install, configure, and run consistently across platforms, given its
    design and implementation on the Java runtime environment.
  • Documentation. Its code should be readable, self-explanatory, and documented clearly and
    unambiguously in critical or tricky parts. It should include an introductory guide to get
    users started and, preferably, provide sample datasets, tutorials, and demos so users can run
    examples out of the box.
  • Performance. It should respond to the user within a reasonable amount of time given a
    limited amount of data (the exact bounds remain to be specified). Preferably, it can estimate
    the running time needed to perform a task and notify the user before the task is actually
    executed.
  • Dependency. Dependency management is an open issue: the package integrates other external
    packages and therefore has many dependencies. How these dependencies are resolved, and how
    our package is distributed, remains to be decided.


