├── .gitignore
├── CONTRIBUTING.md
├── INSTALL.md
├── LICENSE
├── README.md
├── app
│   ├── __init__.py
│   ├── main
│   │   ├── __init__.py
│   │   └── forms.py
│   ├── models.py
│   ├── static
│   │   ├── css
│   │   │   └── style.css
│   │   └── images
│   │       ├── ajax-loader.gif
│   │       ├── bronze-cup.png
│   │       ├── dials.jpg
│   │       ├── gold-cup.png
│   │       ├── platinum-cup.png
│   │       └── silver-cup.png
│   ├── templates
│   │   ├── base.html
│   │   ├── index.html
│   │   ├── metric.html
│   │   ├── metrics.html
│   │   ├── metrics_awards.html
│   │   ├── metrics_scores.html
│   │   ├── metrics_select.html
│   │   ├── software.html
│   │   └── submit.html
│   └── views.py
├── config.py.template
├── docs
│   ├── SAF_ER.png
│   ├── UI Mockups
│   │   ├── Criteria.png
│   │   ├── Results.png
│   │   ├── Select_Criteria.png
│   │   ├── Submit.png
│   │   ├── cup.xcf
│   │   ├── rosette.png
│   │   └── saf.bmpr
│   ├── comparing_research_software.pdf
│   ├── entities.md
│   ├── requirements.md
│   └── roadmap.md
├── plugins
│   ├── __init__.py
│   ├── metric
│   │   ├── __init__.py
│   │   ├── contributing.py
│   │   ├── contributing.yapsy-plugin
│   │   ├── documentation_developer.py
│   │   ├── documentation_developer.yapsy-plugin
│   │   ├── documentation_user.py
│   │   ├── documentation_user.yapsy-plugin
│   │   ├── freshness.py
│   │   ├── freshness.yapsy-plugin
│   │   ├── license.py
│   │   ├── license.yapsy-plugin
│   │   ├── metric.py
│   │   ├── readme.py
│   │   ├── readme.yapsy-plugin
│   │   ├── usability
│   │   │   └── __init__.py
│   │   ├── vitality.py
│   │   └── vitality.yapsy-plugin
│   └── repository
│       ├── __init__.py
│       ├── bitbucket.py
│       ├── bitbucket.yapsy-plugin
│       ├── github.py
│       ├── github.yapsy-plugin
│       └── helper.py
├── requirements.txt
└── run.py
/.gitignore:
--------------------------------------------------------------------------------
1 | venv/
2 | .idea/
3 | __pycache__
4 | data.sqlite
5 | saf.log*
6 | config.py
7 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # How to contribute
2 |
3 | ## **Do you want to suggest a new metric?**
4 |
5 | * We'd love to hear what you think makes good metrics - we're taking suggestions through the GitHub issues system.
6 |
7 | * To suggest a new metric:
8 | 1. **Check the metric has not been suggested / implemented already** by searching on GitHub under [Issues with the label "metric"](https://github.com/softwaresaved/software-assessment-framework/issues?q=is%3Aissue+label%3Ametric)
9 | 2. If it's not there, open a [new issue](https://github.com/softwaresaved/software-assessment-framework/issues/new). Be sure to include a **title and clear description of your metric**, with as much relevant information as possible, for instance what category of metric it is, links to background references / material, and a description of what the metric measures. Make sure you apply the **"Metric"** label to the issue.
10 |
11 | ## **Did you find a bug?**
12 |
13 | * **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/softwaresaved/software-assessment-framework/issues).
14 |
15 | * If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/softwaresaved/software-assessment-framework/issues/new). Be sure to include a **title and clear description**, as much relevant information as possible, and a **code sample** or an **executable test case** demonstrating the expected behavior that is not occurring.
16 |
17 | ## **Did you write a patch that fixes a bug?**
18 |
19 | * Open a new GitHub pull request with the patch.
20 |
21 | * Ensure the pull request description clearly describes the problem and solution. Include the relevant issue number if applicable.
22 |
23 | ## **Do you intend to add a new feature or change an existing one?**
24 |
25 | * We're particularly keen to extend the range of software metrics on offer - take a look at the [existing set](https://github.com/softwaresaved/software-assessment-framework/tree/master/plugins/metric) and extend the [Base Metric class](https://github.com/softwaresaved/software-assessment-framework/blob/master/plugins/metric/metric.py) (see the sketch at the end of this section).
26 |
27 | * You could also extend the range of [Repository Helper](https://github.com/softwaresaved/software-assessment-framework/blob/master/plugins/repository/helper.py) plugins to support new code repositories.
28 |
29 | * Create a [new issue](https://github.com/softwaresaved/software-assessment-framework/issues/new) suggesting your change. **On your issue, please include a reproducible case that we can actually run.**
30 |
31 | * After you have collected some positive feedback about the change, you should start writing code; when your code is ready, submit a pull request.
32 |
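As a rough illustration, a new automated metric plugin might look something like the sketch below. The attribute and method names (`NAME`, `IDENTIFIER`, `CATEGORY`, `SHORT_DESCRIPTION`, `LONG_DESCRIPTION`, `SELF_ASSESSMENT`, `run()`, `get_score()`, `get_feedback()`) are those used by `app/views.py`; the base class import and the `get_files_from_root()` helper call are assumptions, so check `plugins/metric/metric.py` and `plugins/repository/helper.py` for the real interface. A matching `.yapsy-plugin` descriptor file is also needed, as for the existing plugins, and repository helper plugins under `plugins/repository` follow the same pattern.

```python
# citation.py - hypothetical metric plugin (sketch only)
from plugins.metric.metric import Metric  # assumed base class name


class CitationMetric(Metric):
    NAME = "Citation file"
    IDENTIFIER = "citation"  # unique id; see the existing plugins for the convention used
    CATEGORY = "AVAILABILITY"  # or USABILITY / MAINTAINABILITY / PORTABILITY
    SHORT_DESCRIPTION = "Has a citation file?"
    LONG_DESCRIPTION = "Checks whether the repository contains a CITATION or CITATION.cff file."
    SELF_ASSESSMENT = False  # False = automated, run against the repository via a helper

    def run(self, software, helper=None, form_data=None):
        # 'helper' is the RepositoryHelper selected for the submitted URL;
        # get_files_from_root() is assumed here purely for illustration.
        found = helper.get_files_from_root(["CITATION", "CITATION.cff"])
        # Scores appear to be on a 0-100 scale (see has_bronze_award in app/views.py).
        self.score = 100 if found else 0
        self.feedback = ("A citation file was found." if found
                         else "No citation file was found - consider adding CITATION.cff.")

    def get_score(self):
        return self.score

    def get_feedback(self):
        return self.feedback
```
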
33 | ## **Do you have questions about the source code?**
34 |
35 | * Ask any question as a [new issue](https://github.com/softwaresaved/software-assessment-framework/issues/new).
36 |
37 | Thanks!
38 |
39 | The Software Sustainability Institute Team
40 |
--------------------------------------------------------------------------------
/INSTALL.md:
--------------------------------------------------------------------------------
1 | # Installing the Software Assessment Framework (SAF)
2 |
3 | ## Requirements
4 | SAF has been developed using Python 3.5; it uses language features and syntax not supported in Python 2.x.
5 |
6 | ## Installation
7 | Use of a Python Virtual Environment [venv](https://docs.python.org/3/library/venv.html) is suggested
8 |
9 | ### Ubuntu Linux
10 | 1. Setup and activate the Virtual Environment
11 |
12 | `$ virtualenv -p python3 venv`
13 |
14 | `$ source venv/bin/activate`
15 |
16 | 2. Clone the SAF GitHub repository:
17 |
18 | `$ git clone https://github.com/softwaresaved/software-assessment-framework.git`
19 |
20 | 3. Install the prerequisite Python packages as described in requirements.txt:
21 |
22 | `$ cd software-assessment-framework`
23 |
24 | `$ pip install -r requirements.txt`
25 |
26 | ### MacOS
27 | The installation instructions for MacOS are the same as those for Ubuntu Linux.
28 |
29 | ### Windows
30 | 1. Install Python 3.6, Anaconda and Git Bash - ensure the PATH environment variables are set during the Anaconda installation.
31 |
32 | 2. Set up and activate the virtual environment
33 |
34 | `$ virtualenv venv`
35 |
36 | `$ source venv/Scripts/activate`
37 |
38 | 3. Clone the SAF GitHub repository
39 |
40 | `$ git clone https://github.com/UserName/software-assessment-framework.git`
41 |
42 | 4. Install the prerequisite Python packages as described in requirements.txt
43 |
44 | `$ cd software-assessment-framework`
45 |
46 | `$ pip install -r requirements.txt`
47 |
48 |
49 |
50 |
51 | ### Using conda instead of venv
52 |
53 | If you use conda to manage virtual environments, replace step 1 as follows:
54 |
55 | 1. Set up and activate the conda virtual environment
56 |
57 | `$ conda create --name saf python=3.6`
58 |
59 | `$ source activate saf`
60 |
61 |
62 | ## Configuration
63 | Make a copy of `config.py.template` as `config.py`.
64 | Some operations employ the GitHub API, and require a GitHub [Personal Access Token](https://github.com/settings/tokens) to be generated and inserted into `config.py`
65 |
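For example, copy the template and paste your token into the `github_api_token` property (the values shown here are placeholders):

`$ cp config.py.template config.py`

Then in `config.py`:

`github_api_token = "xxxx-your-personal-access-token-xxxx"`
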
66 | ## Running
67 | `run.py` starts the web app, which will be accessible at http://localhost:5000:
68 |
69 | `python run.py`
70 |
71 |
72 |
73 |
74 |
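For orientation, a minimal launcher consistent with the settings in `config.py.template` (host, port, logging) would look roughly like the sketch below; the actual `run.py` may differ.

```python
# Sketch only - the real run.py may differ.
import logging

import config          # copied from config.py.template
from app import app    # Flask app created in app/__init__.py

if __name__ == '__main__':
    # Log to the file and at the level configured in config.py
    handler = logging.FileHandler(config.log_file_name)
    handler.setLevel(config.log_level)
    app.logger.addHandler(handler)
    app.run(host=config.host, port=config.port)
```
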
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright 2017 The University of Southampton on behalf of the Software Sustainability Institute
2 |
3 | Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
4 |
5 | 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
6 |
7 | 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
8 |
9 | 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
10 |
11 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # software-assessment-framework
2 | Codebase for facilitating the assessment of different criteria for research software
3 |
4 | The Software Assessment Framework is a project to make it easier for developers
5 | to understand the "quality" of a piece of research software, which in turn will allow them to
6 | improve software reuse and increase recognition for good software development practice.
7 |
8 | To ensure adoption and impact, it is important that the use of this framework
9 | is bottom-up (encouraging code owners to be proactive in getting their codes assessed);
10 | easy-to-use (with objective measures);
11 | simple (to avoid confusion);
12 | enables community norms (be understanding of the relative priorities of different communities);
13 | and minimises game playing.
14 |
15 | ## Assessment Categories
16 |
17 | The top level assessment categories are:
18 | * Availability: can a user find the software (discovery) and can they obtain the software (access)?
19 | * Usability: can a user understand the operation of the software, such that they can use it, integrate it with other software, and extend or modify it?
20 | * Maintainability: what is the likelihood that the software can be maintained and developed over a period of time?
21 | * Portability: what is the capacity for using the software in a different area, field, or environment?
22 |
23 | ## Contributing to this project
24 |
25 | Want to suggest a [metric](https://github.com/softwaresaved/software-assessment-framework/issues?q=is%3Aissue+label%3Ametric), report a [bug](https://github.com/softwaresaved/software-assessment-framework/issues?q=is%3Aopen+is%3Aissue+label%3Abug), or contribute your time and experience to the project? Please see our [Guidelines for Contributing](https://github.com/softwaresaved/software-assessment-framework/blob/master/CONTRIBUTING.md).
26 |
27 | ## Planning
28 |
29 | * Requirements
30 | * Roadmap
31 |
32 | ## Acknowledgements
33 |
34 | This work has been funded by EPSRC through additional support to grant EP/H043160/1 and EPSRC, ESRC and BBSRC through grant EP/N006410/1.
35 |
--------------------------------------------------------------------------------
/app/__init__.py:
--------------------------------------------------------------------------------
1 | from flask import Flask
2 | from flask_bootstrap import Bootstrap
3 | import os
4 |
5 | app = Flask(__name__)
6 | app.config['SECRET_KEY'] = os.urandom(24)
7 | Bootstrap(app)
8 |
9 | from app import views
10 |
11 |
12 |
--------------------------------------------------------------------------------
/app/main/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/app/main/__init__.py
--------------------------------------------------------------------------------
/app/main/forms.py:
--------------------------------------------------------------------------------
1 | from flask_wtf import FlaskForm
2 | from wtforms import StringField, SubmitField, TextAreaField, BooleanField, SelectMultipleField, widgets
3 | from wtforms.validators import DataRequired
4 |
5 | # Forms for webapp. Uses WTForms
6 |
7 |
8 | class SoftwareSubmitForm(FlaskForm):
9 | """
10 | Form for initial submission of software for evaluation
11 | """
12 | name = StringField('Software Name', validators=[DataRequired()])
13 | description = TextAreaField('Description', render_kw={"rows": 10})
14 | version = StringField('Version')
15 | url = StringField('Repository URL', validators=[DataRequired()])
16 | api_token = StringField('API Token (private repositories only)')
17 | submit = SubmitField('Submit')
18 |
--------------------------------------------------------------------------------
/app/models.py:
--------------------------------------------------------------------------------
1 | from . import app
2 | from flask_sqlalchemy import SQLAlchemy
3 | import datetime
4 | import os
5 |
6 |
7 | # SQLAlchemy setup
8 | basedir = os.path.abspath(os.path.dirname(__file__))
9 | app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///'+os.path.join(basedir, '../data.sqlite')
10 | app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN'] = True
11 |
12 | db = SQLAlchemy(app)
13 |
14 |
15 | class Software(db.Model):
16 | """
17 | Entity class for an item of software submitted for assessment
18 | """
19 | __tablename__ = 'software'
20 | id = db.Column(db.Integer, primary_key=True, autoincrement=True)
21 | name = db.Column(db.Text)
22 | description = db.Column(db.Text)
23 | version = db.Column(db.Text)
24 | submitter = db.Column(db.Text)
25 | submitted = db.Column(db.DateTime, default=datetime.datetime.now)
26 | url = db.Column(db.Text)
27 | scores = db.relationship('Score', backref='software', lazy='dynamic')
28 |
29 |
30 | class Score(db.Model):
31 | """
32 | Entity class for the result of running a metric against an item of software
33 | """
34 | __tablename__ = 'score'
35 | id = db.Column(db.Integer, primary_key=True, autoincrement=True)
36 | software_id = db.Column(db.Integer, db.ForeignKey('software.id'))
37 | name = db.Column(db.Text)
38 | identifier = db.Column(db.Text)
39 | category = db.Column(db.Text)
40 | short_description = db.Column(db.Text)
41 | long_description = db.Column(db.Text)
42 | interactive = db.Column(db.Boolean)
43 | value = db.Column(db.Integer)
44 | feedback = db.Column(db.Text)
45 | category_importance = db.Column(db.Integer)
46 | metric_importance = db.Column(db.Integer)
47 | updated = db.Column(db.TIMESTAMP, server_default=db.func.now(), onupdate=db.func.current_timestamp())
48 |
49 | def __init__(self, software_id, name, identifier, category, short_description, long_description, interactive, value,
50 | feedback, category_importance=1, metric_importance=1):
51 | self.software_id = software_id
52 | self.name = name
53 | self.identifier = identifier
54 | self.category = category
55 | self.short_description = short_description
56 | self.long_description = long_description
57 | self.interactive = interactive
58 | self.value = value
59 | self.feedback = feedback
60 | self.category_importance = category_importance
61 | self.metric_importance = metric_importance
62 |
63 |
64 | # Create database if required
65 | if not os.path.exists(os.path.join(basedir, '../data.sqlite')):
66 | app.logger.info("Creating tables in ./data.sqlite")
67 | db.create_all()
68 |
--------------------------------------------------------------------------------
/app/static/css/style.css:
--------------------------------------------------------------------------------
1 | .assessment-categories {
2 | font-size: 1.2em;
3 | margin: 15px 0px 25px 0px;
4 | padding: 0px 0px 10px 0px;
5 | }
6 | .assessment-category {
7 | font-weight: bold;
8 | }
9 |
10 | .cup {
11 | height: 192px;
12 | width: 192px;
13 | display: block;
14 | margin-bottom: 15px;
15 | margin-left: auto;
16 | margin-right: auto;
17 | margin-top: 15px;
18 | }
19 |
20 | .cup-arrow {
21 | display: block;
22 | font-size: 50px;
23 | text-align: center;
24 | }
25 |
26 | .cup-inactive {
27 | opacity: 0.3;
28 | }
29 |
30 | .cup-ruler {
31 | border-top: 2px solid;
32 | border-left: 2px solid;
33 | height: 10px;
34 | margin-bottom: 10px;
35 | }
36 |
37 | .footer-page {
38 | margin: 50px 0px 20px 0px;
39 | }
40 |
41 | .header-image {
42 | width: 100%;
43 | margin: -20px 0px 20px 0px;
44 |
45 | }
46 | /** Prevent range controls taking up the whole line breaking labels **/
47 | .importance {
48 | padding: 0px 2px 0px 2px;
49 | border: 1px;
50 | border-color: rgb(188, 232, 241);
51 | border-style: solid;
52 | border-radius: 4px;
53 | background-color: white;
54 | font-size: 12px;
55 | }
56 |
57 | input[type="range"] {
58 | width:100px;
59 | display:inline;
60 | padding: 5px;
61 | vertical-align: middle;
62 | }
63 |
64 | input[type="radio"] {
65 | vertical-align: -3px;
66 | }
67 |
68 | input[type="checkbox"] {
69 | vertical-align: -3px;
70 | }
71 |
72 | .metric-row {
73 | display: inline;
74 | margin-bottom: 10px;
75 | }
76 |
77 | .metric-question {
78 | font-weight: normal;
79 | display: inline;
80 | }
81 |
82 | .metric-question li {
83 | list-style: none;
84 | }
85 |
86 |
87 | .metrics-award-text {
88 | margin: 10px 5px 5px 5px;
89 | color: gray;
90 | text-decoration: line-through;
91 | font-size: 1.2em;
92 | }
93 |
94 | .metrics-award-text-passed {
95 | color: initial ;
96 | text-decoration: none;
97 | }
98 |
99 | h2, h3, h4, h5 {
100 | font-weight: bold;
101 | }
102 |
103 | h4.preamble {
104 | margin-bottom: 20px;
105 | }
106 |
107 | h1.award-level {
108 | margin-top: 50px;
109 | margin-bottom: 50px;
110 | font-size: 4.0em;
111 | }
112 |
113 |
114 |
115 | ul li label {
116 | display: inline;
117 | font-weight: normal;
118 | }
119 |
120 |
--------------------------------------------------------------------------------
/app/static/images/ajax-loader.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/app/static/images/ajax-loader.gif
--------------------------------------------------------------------------------
/app/static/images/bronze-cup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/app/static/images/bronze-cup.png
--------------------------------------------------------------------------------
/app/static/images/dials.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/app/static/images/dials.jpg
--------------------------------------------------------------------------------
/app/static/images/gold-cup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/app/static/images/gold-cup.png
--------------------------------------------------------------------------------
/app/static/images/platinum-cup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/app/static/images/platinum-cup.png
--------------------------------------------------------------------------------
/app/static/images/silver-cup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/app/static/images/silver-cup.png
--------------------------------------------------------------------------------
/app/templates/base.html:
--------------------------------------------------------------------------------
1 | {% extends "bootstrap/base.html" %}
2 | {% import "bootstrap/wtf.html" as wtf %}
3 | {% block head %}
4 | {{ super() }}
5 |
6 | {% endblock %}
7 | {% block title %} Software Assessment Framework {% endblock %}
8 | {% block navbar %}
9 |
10 |
--------------------------------------------------------------------------------
/app/templates/index.html:
--------------------------------------------------------------------------------
[markup stripped in extraction; only the template's visible text is recoverable]

Understanding the quality of your research software

The Software Assessment Framework is a tool to make it easier for developers to understand the "quality" of a piece of research software, which in turn will allow them to improve software reuse and increase recognition for good software development practice.

Assessment Categories

The top level assessment categories are:

Availability: can a user find the software (discovery) and can they obtain the software (access)?
Usability: can a user understand the operation of the software, such that they can use it, integrate it with other software, and extend or modify it?
Maintainability: what is the likelihood that the software can be maintained and developed over a period of time?
Portability: what is the capacity for using the software in a different area, field, or environment?
--------------------------------------------------------------------------------
/app/templates/submit.html:
--------------------------------------------------------------------------------
[fragment; most of this template was lost in extraction]
 7 | {{ wtf.quick_form(form) }}
 8 | {% endblock %}
 9 |
--------------------------------------------------------------------------------
/app/views.py:
--------------------------------------------------------------------------------
1 | from flask import render_template, redirect, url_for, session, flash, abort
2 | from flask_wtf import FlaskForm
3 | from wtforms import RadioField, SubmitField, FormField, BooleanField
4 | from wtforms.fields.html5 import IntegerRangeField
5 | from wtforms.validators import DataRequired
6 | from app.main.forms import SoftwareSubmitForm
7 | from app.models import db, Software, Score
8 | from app import app
9 | import hashlib
10 | from plugins.repository.helper import *
11 | import plugins.metric
12 |
13 |
14 | # Routes and views
15 |
16 | @app.route('/', methods=['GET', 'POST'])
17 | @app.route('/index')
18 | def index():
19 | """
20 | Index Page
21 | :return:
22 | """
23 | return render_template('index.html')
24 |
25 |
26 | @app.route('/software', methods=['GET'])
27 | def browse_software():
28 | """
29 | Browse Software Submissions
30 | :return:
31 | """
32 | sw_all = Software.query.group_by(Software.name, Software.version)
33 | return render_template('software.html', sw_all=sw_all)
34 |
35 |
36 | @app.route('/metrics', methods=['GET'])
37 | def browse_metrics():
38 | """
39 | List Metrics
40 | :return:
41 | """
42 | all_metrics = plugins.metric.load()
43 |
44 | # Todo - move this all out to a separate module. Define metric categories and descriptions, loading and filtering
45 |
46 | metrics_availabilty = []
47 | metrics_usability = []
48 | metrics_maintainability = []
49 | metrics_portability = []
50 | for metric in all_metrics:
51 | if metric.CATEGORY == "AVAILABILITY":
52 | metrics_availabilty.append(metric)
53 | if metric.CATEGORY == "USABILITY":
54 | metrics_usability.append(metric)
55 | if metric.CATEGORY == "MAINTAINABILITY":
56 | metrics_maintainability.append(metric)
57 | if metric.CATEGORY == "PORTABILITY":
58 | metrics_portability.append(metric)
59 |
60 | a_meta = {"category_name": "Availability",
61 | "category_description": "Can a user find the software (discovery) and can they obtain the software (access)?",
62 | "metrics": metrics_availabilty}
63 |
64 | u_meta = {"category_name": "Usability",
65 | "category_description": "Can a user understand the operation of the software, such they can use it, integrate it with other software and extend or modify it?",
66 | "metrics": metrics_usability}
67 |
68 | m_meta = {"category_name": "Maintainability",
69 | "category_description": "What is the likelihoood that the software can be maintained and developed over a period of time?",
70 | "metrics": metrics_maintainability}
71 |
72 | p_meta = {"category_name": "Portability",
73 | "category_description": "What is the capacity for using the software in a different area, field or environment?",
74 | "metrics": metrics_portability}
75 |
76 | return render_template('metrics.html',
77 | a_meta=a_meta,
78 | u_meta=u_meta,
79 | m_meta=m_meta,
80 | p_meta=p_meta)
81 |
82 |
83 | @app.route('/metrics/<metric_identifier>', methods=['GET'])
84 | def show_metric(metric_identifier):
85 | """
86 | Display a single metric
87 | :param metric_identifier:
88 | :return:
89 | """
90 | the_metric = None
91 | metrics_all = plugins.metric.load()
92 | for m in metrics_all:
93 | if m.IDENTIFIER == metric_identifier:
94 | the_metric = m
95 |
96 | if the_metric is None:
97 | # We don't recognise that id
98 | abort(404)
99 |
100 | return render_template('metric.html', metric=the_metric)
101 |
102 |
103 | @app.route('/submit', methods=['GET', 'POST'])
104 | def submit_software():
105 | """
106 | Submit software for assessment
107 | :return:
108 | """
109 | software_submit_form = SoftwareSubmitForm()
110 | failed = False
111 | # Is this a form submission?
112 | if software_submit_form.validate_on_submit():
113 | app.logger.info("Received a software submission: "+software_submit_form.url.data)
114 |
115 | # Examine the URL, locate a RepositoryHelper to use
116 | repos_helper = find_repository_helper(software_submit_form.url.data)
117 | if repos_helper is None:
118 | failed = True
119 | fail_message = "Sorry, we don't yet know how to talk to the repository at: " + software_submit_form.url.data
120 | app.logger.error("Unable to find a repository helper for: " + software_submit_form.url.data)
121 | else:
122 | # try to login
123 | try:
124 | if software_submit_form.api_token.data != "":
125 | session['api_token'] = software_submit_form.api_token.data
126 | repos_helper.api_token = software_submit_form.api_token.data
127 | repos_helper.login()
128 | except RepositoryHelperError as err:
129 | failed = True
130 | fail_message = err.message
131 | app.logger.error("Unable to log in to the repository, check URL and permissions. Supply an API token if private.")
132 |
133 | if failed:
134 | flash(fail_message)
135 | return redirect(url_for('submit_software'))
136 | else:
137 | # Create a new Software instance
138 | sw = Software(id=None,
139 | name=software_submit_form.name.data,
140 | description=software_submit_form.description.data,
141 | version=software_submit_form.version.data,
142 | submitter='User',
143 | url=software_submit_form.url.data)
144 | # Persist it
145 | db.session.add(sw)
146 | db.session.commit()
147 | # Add software_id to Flask session
148 | session['sw_id'] = sw.id
149 | # Forward to interactive (self-assessment) metrics selection
150 | return redirect(url_for('metrics_interactive'))
151 |
152 | return render_template('submit.html', form=software_submit_form)
153 |
154 |
155 | @app.route('/submit/self_assessment', methods=['GET', 'POST'])
156 | def metrics_interactive():
157 | """
158 | Self-Assessment Metrics Selection
159 | :return:
160 | """
161 | # Load the software from the id stored in the session
162 | # NB - We use the software_id from the session, rather than from the request,
163 | # this prevents users other than the submitter changing the metrics to be run
164 | sw = Software.query.filter_by(id=session['sw_id']).first()
165 | # Load interactive metrics
166 | app.logger.info("Finding Self-Assessment metrics")
167 | # FixMe - implement a category based filter for plugin loading to avoid repetition below
168 | all_metrics = plugins.metric.load()
169 |
170 | # In order to be able to separate, and label the categories, we need to create *individual* sub-form classes
171 | # To dynamically add fields, we have to define the Form class at *runtime*, and instantiate it.
172 | # This feels *wrong* and *bad*, but it has to be done this way.
173 | class InteractiveMetricAvailabilityForm(FlaskForm):
174 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
175 |
176 | class InteractiveMetricUsabilityForm(FlaskForm):
177 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
178 |
179 | class InteractiveMetricMaintainabilityForm(FlaskForm):
180 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
181 |
182 | class InteractiveMetricPortabilityForm(FlaskForm):
183 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
184 |
185 | # Add metrics to their appropriate sub-forms classes (not instances)
186 | # FixMe - Because the choices come from a dictionary, the sort order is random
187 | for metric in all_metrics:
188 | if metric.SELF_ASSESSMENT:
189 | if metric.CATEGORY == "AVAILABILITY":
190 | setattr(InteractiveMetricAvailabilityForm, metric.IDENTIFIER,
191 | RadioField(label=metric.LONG_DESCRIPTION, choices=metric.get_ui_choices().items(), validators=[DataRequired()]))
192 | setattr(InteractiveMetricAvailabilityForm, "IMPORTANCE_" + metric.IDENTIFIER,
193 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
194 | if metric.CATEGORY == "USABILITY":
195 | setattr(InteractiveMetricUsabilityForm, metric.IDENTIFIER,
196 | RadioField(label=metric.LONG_DESCRIPTION, choices=metric.get_ui_choices().items(), validators=[DataRequired()]))
197 | setattr(InteractiveMetricUsabilityForm, "IMPORTANCE_"+metric.IDENTIFIER,
198 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
199 | if metric.CATEGORY == "MAINTAINABILITY":
200 | setattr(InteractiveMetricMaintainabilityForm, metric.IDENTIFIER,
201 | RadioField(label=metric.LONG_DESCRIPTION, choices=metric.get_ui_choices().items(), validators=[DataRequired()]))
202 | setattr(InteractiveMetricMaintainabilityForm, "IMPORTANCE_" + metric.IDENTIFIER,
203 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
204 | if metric.CATEGORY == "PORTABILITY":
205 | setattr(InteractiveMetricPortabilityForm, metric.IDENTIFIER,
206 | RadioField(label=metric.LONG_DESCRIPTION, choices=metric.get_ui_choices().items(), validators=[DataRequired()]))
207 | setattr(InteractiveMetricPortabilityForm, "IMPORTANCE_" + metric.IDENTIFIER,
208 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
209 |
210 | # Build the top-level form with the instances of the now populated sub-form classes.
211 | class InteractiveMetricRunForm(FlaskForm):
212 | ff_a = FormField(InteractiveMetricAvailabilityForm, label="Availability", description="Can a user find the software (discovery) and can they obtain the software (access)?")
213 | ff_u = FormField(InteractiveMetricUsabilityForm, label="Usability", description="Can a user understand the operation of the software, such that they can use it, integrate it with other software and extend or modify it?")
214 | ff_m = FormField(InteractiveMetricMaintainabilityForm, label="Maintainability", description="What is the likelihood that the software can be maintained and developed over a period of time?")
215 | ff_p = FormField(InteractiveMetricPortabilityForm, label="Portability", description="What is the capacity for using the software in a different area, field or environment?")
216 | submit = SubmitField('Next')
217 |
218 | # Get an instance of the top level form
219 | interactive_metric_run_form = InteractiveMetricRunForm()
220 |
221 | # Deal with submission
222 | if interactive_metric_run_form.validate_on_submit():
223 | # Run the metrics
224 | run_interactive_metrics(interactive_metric_run_form.ff_u.data, all_metrics, sw)
225 | run_interactive_metrics(interactive_metric_run_form.ff_a.data, all_metrics, sw)
226 | run_interactive_metrics(interactive_metric_run_form.ff_m.data, all_metrics, sw)
227 | run_interactive_metrics(interactive_metric_run_form.ff_p.data, all_metrics, sw)
228 | # Forward to automated metrics
229 | return redirect(url_for('metrics_automated'))
230 |
231 | # Default action
232 |
233 | flash_errors(interactive_metric_run_form)
234 | return render_template('metrics_select.html', page_title="Self Assessment",
235 | form=interactive_metric_run_form,
236 | form_target="metrics_interactive",
237 | preamble="Answer the following questions about your software, indicating their importance to you.",
238 | software=sw)
239 |
240 |
241 | @app.route('/submit/automated', methods=['GET', 'POST'])
242 | def metrics_automated():
243 | """
244 | Automated Metrics Selection and Execution
245 | :return:
246 | """
247 | # Load the software from the id stored in the session
248 | # NB - We use the software_id from the session, rather than from the request,
249 | # this prevents users other than the submitter changing the metrics to be run
250 | sw = Software.query.filter_by(id=session['sw_id']).first()
251 | # Load automated metrics
252 | app.logger.info("Finding automated metrics")
253 | metrics = plugins.metric.load()
254 |
255 | # In order to be able to separate, and label the categories, we need to create *individual* sub-form classes
256 | # To dynamically add fields, we have to define the Form class at *runtime*, and instantiate it.
257 | # This feels *wrong* and *bad*, but it has to be done this way.
258 | class AutoMetricAvailabilityForm(FlaskForm):
259 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
260 |
261 | class AutoMetricUsabilityForm(FlaskForm):
262 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
263 |
264 | class AutoMetricMaintainabilityForm(FlaskForm):
265 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
266 |
267 | class AutoMetricPortabilityForm(FlaskForm):
268 | importance = IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1})
269 |
270 | # Add metrics to their appropriate sub-forms classes (not instances)
271 | # FixMe - Because the choices come from a dictionary, the sort order is random
272 | for metric in metrics:
273 | if metric.SELF_ASSESSMENT:
274 | continue
275 | if metric.CATEGORY == "AVAILABILITY":
276 | setattr(AutoMetricAvailabilityForm, metric.IDENTIFIER,
277 | BooleanField(label=metric.SHORT_DESCRIPTION))
278 | setattr(AutoMetricAvailabilityForm, "IMPORTANCE_" + metric.IDENTIFIER,
279 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
280 |
281 | if metric.CATEGORY == "USABILITY":
282 | setattr(AutoMetricUsabilityForm, metric.IDENTIFIER,
283 | BooleanField(label=metric.SHORT_DESCRIPTION))
284 | setattr(AutoMetricUsabilityForm, "IMPORTANCE_" + metric.IDENTIFIER,
285 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
286 |
287 | if metric.CATEGORY == "MAINTAINABILITY":
288 | setattr(AutoMetricMaintainabilityForm, metric.IDENTIFIER,
289 | BooleanField(label=metric.SHORT_DESCRIPTION))
290 | setattr(AutoMetricMaintainabilityForm, "IMPORTANCE_" + metric.IDENTIFIER,
291 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
292 |
293 | if metric.CATEGORY == "PORTABILITY":
294 | setattr(AutoMetricPortabilityForm, metric.IDENTIFIER,
295 | BooleanField(label=metric.SHORT_DESCRIPTION))
296 | setattr(AutoMetricPortabilityForm, "IMPORTANCE_" + metric.IDENTIFIER,
297 | IntegerRangeField("Importance to you:", render_kw={"value": 1, "min": 0, "max": 1}))
298 |
299 | # Build the top-level form with the instances of the now populated sub-form classes.
300 | class AutomatedMetricRunForm(FlaskForm):
301 | ff_a = FormField(AutoMetricAvailabilityForm, label="Availability",
302 | description="Can a user find the software (discovery) and can they obtain the software (access)?")
303 | ff_u = FormField(AutoMetricUsabilityForm, label="Usability",
304 | description="Can a user understand the operation of the software, such they can use it, integrate it with other software and extend or modify it?")
305 | ff_m = FormField(AutoMetricMaintainabilityForm, label="Maintainability",
306 | description="What is the likelihoood that the software can be maintained and developed over a period of time?")
307 | ff_p = FormField(AutoMetricPortabilityForm, label="Portability",
308 | description="What is the capacity for using the software in a different area, field or environment?")
309 | submit = SubmitField('Next')
310 |
311 | # Get an instance of the top level form
312 | automated_metric_run_form = AutomatedMetricRunForm()
313 |
314 | # Deal with submission
315 | if automated_metric_run_form.validate_on_submit():
316 | # Load the RepositoryHelper again
317 | if sw.url and sw.url.strip() != "":
318 | repos_helper = find_repository_helper(sw.url)
319 | if 'api_token' in session:
320 | repos_helper.api_token = session['api_token']
321 | repos_helper.login()
322 |
323 | # Run the appropriate metrics
324 | run_automated_metrics(automated_metric_run_form.ff_u.data, metrics, sw, repos_helper)
325 | run_automated_metrics(automated_metric_run_form.ff_a.data, metrics, sw, repos_helper)
326 | run_automated_metrics(automated_metric_run_form.ff_m.data, metrics, sw, repos_helper)
327 | run_automated_metrics(automated_metric_run_form.ff_p.data, metrics, sw, repos_helper)
328 |
329 | # Forward to results display
330 | return redirect(url_for('metrics_scores', software_id=sw.id))
331 |
332 | flash_errors(automated_metric_run_form)
333 | return render_template('metrics_select.html', page_title="Automated Assessment",
334 | form=automated_metric_run_form,
335 | form_target="metrics_automated",
336 | preamble="Select from the following automated metrics to run against your repository, indicating their importance to you.",
337 | software=sw)
338 |
339 |
340 | @app.route('/scores/<int:software_id>', methods=['GET'])
341 | def metrics_scores(software_id):
342 | """
343 | Metrics scores (raw scores - post submission)
344 | :param software_id:
345 | :return:
346 | """
347 | # Load the Software
348 | sw = Software.query.filter_by(id=software_id).first()
349 |
350 | # Load the scores
351 | availability_scores = Score.query.filter_by(software_id=software_id, category="AVAILABILITY")
352 | usability_scores = Score.query.filter_by(software_id=software_id, category="USABILITY")
353 | maintainability_scores = Score.query.filter_by(software_id=software_id, category="MAINTAINABILITY")
354 | portability_scores = Score.query.filter_by(software_id=software_id, category="PORTABILITY")
355 |
356 | return render_template('metrics_scores.html',
357 | software=sw,
358 | scores={"Availability": availability_scores, "Usability": usability_scores,
359 | "Maintainability": maintainability_scores, "Portability": portability_scores}
360 | )
361 |
362 |
363 | @app.route('/awards/<int:software_id>', methods=['GET'])
364 | def metrics_awards(software_id):
365 | """
366 | Calculate the awards, based on the scores
367 | :param software_id:
368 | :return:
369 | """
370 | # Load the Software
371 | sw = Software.query.filter_by(id=software_id).first()
372 |
373 | if sw is None:
374 | # We don't recognise that id
375 | abort(404)
376 |
377 | award = None
378 | if has_bronze_award(software_id):
379 | award = "Bronze"
380 | app.logger.info("Passed Bronze")
381 | if has_silver_award(software_id):
382 | award = "Silver"
383 | app.logger.info("Passed Silver")
384 | else:
385 | app.logger.info("Failed Bronze")
386 |
387 | # Find the most recent assessment
388 | assessment_date = Score.query.filter_by(software_id=software_id).order_by(Score.updated.desc()).first().updated
389 |
390 | return render_template('metrics_awards.html',
391 | software=sw,
392 | award=award,
393 | assessment_date=assessment_date)
394 |
395 |
396 | def has_bronze_award(software_id):
397 | """
398 | Ascertain if the piece of software had passed metrics to achieve a bronze award
399 | :param software_id:
400 | :return: True if passed, otherwise false
401 | """
402 | # FixMe - this is not elegant and depends on lots of assumed knowledge about the metrics
403 | # Bronze requires: Having a License and Readme.
404 | # FixMe - may be >1 score if user has gone back and forth in the UI
405 | # prolly best to stop that happening in the first place
406 |
407 | # License
408 | # FixMe - implement get_score
409 | license_scores = Score.query.filter_by(software_id=software_id, short_description="Has a license file?")
410 | license_score = license_scores.first().value
411 | app.logger.info("License Score: " + str(license_score))
412 | if license_score < 50:
413 | return False
414 |
415 | # ReadMe
416 | readme_scores = Score.query.filter_by(software_id=software_id, short_description="Has a README file?")
417 | readme_score = readme_scores.first().value
418 | app.logger.info("README Score: " + str(readme_score))
419 | if readme_score != 100:
420 | return False
421 |
422 | return True
423 |
424 |
425 | def has_silver_award(software_id):
426 | """
427 | Ascertain if the piece of software had passed metrics to achieve a silver award
428 | :param software_id:
429 | :return: True if passed, otherwise false
430 | """
431 | # FixMe - this is not elegant and depends on lots of assumed knowledge about the metrics
432 | # Silver requires: Having a License and Readme.
433 | # FixMe - may be >1 score if user has gone back and forth in the UI
434 | # prolly best to stop that happening in the first place
435 |
436 | # Vitality
437 | vitality_scores = Score.query.filter_by(software_id=software_id, short_description="Calculate committer trend")
438 | vitality_score = vitality_scores.first().value
439 | app.logger.info("Vitality Score: " + str(vitality_score))
440 | if vitality_score < 50:
441 | return False
442 |
443 | return True
444 |
445 |
446 | def run_interactive_metrics(form_data, all_metrics, sw):
447 | """
448 | Match the selected boxes from the form submission to metrics and run. Save the scores and feedback
449 | :param form_data: Metrics to run (List of md5 of the description)
450 | :param all_metrics: List of the available Metrics
451 | :param sw: The Software object being tested
452 | :return:
453 | """
454 | score_ids = []
455 | for metric_id, value in form_data.items():
456 | if metric_id == "submit" or metric_id == "csrf_token":
457 | continue
458 | for metric in all_metrics:
459 | if metric.IDENTIFIER == metric_id:
460 | app.logger.info("Running metric: " + metric.SHORT_DESCRIPTION)
461 | metric.run(software=sw, form_data=value)
462 | app.logger.info(metric.get_score())
463 | app.logger.info(metric.get_feedback())
464 | score = Score(software_id=sw.id,
465 | name=metric.NAME,
466 | identifier=metric.IDENTIFIER,
467 | category=metric.CATEGORY,
468 | short_description=metric.SHORT_DESCRIPTION,
469 | long_description=metric.LONG_DESCRIPTION,
470 | interactive=metric.SELF_ASSESSMENT,
471 | value=metric.get_score(),
472 | feedback=metric.get_feedback(),
473 | category_importance=form_data['importance'],
474 | metric_importance=form_data['IMPORTANCE_'+metric_id])
475 | db.session.add(score)
476 | db.session.commit()
477 | score_ids.append(score.id)
478 | return score_ids
479 |
480 |
481 | def run_automated_metrics(form_data, metrics, sw, repos_helper):
482 | """
483 | Match the selected boxes from the form submission to metrics and run.
484 | Save the scores and feedback
485 | :param form_data: Metrics to run (List of md5 of the description)
486 | :param metrics: List of the available Metrics
487 | :param sw: The Software object being tested
488 | :param repos_helper: A RepositoryHelper object
489 | :return:
490 | """
491 | score_ids = []
492 | for metric_id in form_data:
493 | for metric in metrics:
494 | if metric.IDENTIFIER == metric_id:
495 | app.logger.info("Running metric: " + metric.SHORT_DESCRIPTION)
496 | metric.run(software=sw, helper=repos_helper)
497 | app.logger.info(metric.get_score())
498 | app.logger.info(metric.get_feedback())
499 | score = Score(software_id=sw.id,
500 | name=metric.NAME,
501 | identifier=metric.IDENTIFIER,
502 | category=metric.CATEGORY,
503 | short_description=metric.SHORT_DESCRIPTION,
504 | long_description=metric.LONG_DESCRIPTION,
505 | interactive=metric.SELF_ASSESSMENT,
506 | value=metric.get_score(),
507 | feedback=metric.get_feedback(),
508 | category_importance=form_data['importance'],
509 | metric_importance=form_data['IMPORTANCE_' + metric_id])
510 | db.session.add(score)
511 | db.session.commit()
512 | score_ids.append(score.id)
513 | return score_ids
514 |
515 |
516 | def flash_errors(form):
517 | for field, errors in form.errors.items():
518 | for error in errors:
519 | flash(u"Response required in %s " % (
520 | getattr(form, field).label.text
521 | ))
--------------------------------------------------------------------------------
/config.py.template:
--------------------------------------------------------------------------------
1 | ### Software Assessment Framework - Configuration Properties
2 | import logging
3 |
4 | ## General properties
5 |
6 | host = '0.0.0.0'
7 | port = 5000
8 |
9 | # logging.DEBUG > logging.INFO > logging.WARNING > logging.ERROR > logging.CRITICAL
10 | log_level = logging.DEBUG
11 | log_file_name = "saf.log"
12 |
13 | ## RepositoryHelper properties
14 |
15 | # GitHubHelper
16 | # Some operations employ the GitHub API, and require a GitHub [Personal Access Token]
17 | # (https://github.com/settings/tokens) to be generated and inserted below
18 | github_api_token = ""
19 |
--------------------------------------------------------------------------------
/docs/SAF_ER.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/SAF_ER.png
--------------------------------------------------------------------------------
/docs/UI Mockups/Criteria.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/UI Mockups/Criteria.png
--------------------------------------------------------------------------------
/docs/UI Mockups/Results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/UI Mockups/Results.png
--------------------------------------------------------------------------------
/docs/UI Mockups/Select_Criteria.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/UI Mockups/Select_Criteria.png
--------------------------------------------------------------------------------
/docs/UI Mockups/Submit.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/UI Mockups/Submit.png
--------------------------------------------------------------------------------
/docs/UI Mockups/cup.xcf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/UI Mockups/cup.xcf
--------------------------------------------------------------------------------
/docs/UI Mockups/rosette.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/UI Mockups/rosette.png
--------------------------------------------------------------------------------
/docs/UI Mockups/saf.bmpr:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/UI Mockups/saf.bmpr
--------------------------------------------------------------------------------
/docs/comparing_research_software.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/docs/comparing_research_software.pdf
--------------------------------------------------------------------------------
/docs/entities.md:
--------------------------------------------------------------------------------
1 | # Entities
2 | Notes on entities and their relationships. All of these will likely end up as Python classes; most will also be database tables.
3 |
4 | 
5 |
6 | ## Software
7 | Represents a piece of software submitted for appraisal.
8 |
9 | ## Metric
10 | Represents the property being measured. E.g. 'Has a license file', 'Percentage test coverage'. Can be grouped according to category (e.g. Availability, Usability, Maintainability, Portability). The Python class will be a plugin with an execute() method which performs the appraisal and records the outcome as a Score.
11 |
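In code terms (following the usage in `app/views.py` and `app/models.py`, where the method is called `run()` rather than `execute()`), running a metric and recording its outcome looks roughly like this:

```python
# Illustration based on app/views.py: run a metric against a Software record
# and persist the result as a Score row.
from app.models import db, Score


def record_metric_result(metric, software, repos_helper):
    metric.run(software=software, helper=repos_helper)  # perform the appraisal
    score = Score(software_id=software.id,
                  name=metric.NAME,
                  identifier=metric.IDENTIFIER,
                  category=metric.CATEGORY,
                  short_description=metric.SHORT_DESCRIPTION,
                  long_description=metric.LONG_DESCRIPTION,
                  interactive=metric.SELF_ASSESSMENT,
                  value=metric.get_score(),        # numeric outcome of the appraisal
                  feedback=metric.get_feedback())  # human-readable feedback
    db.session.add(score)
    db.session.commit()
    return score
```
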
12 | ## Score
13 | The outcome of running the metric for a particular piece of software.
14 |
15 | ## Appraisal
16 | A set of metrics, weighted according to user choice, along with their scores.
17 |
18 |
--------------------------------------------------------------------------------
/docs/requirements.md:
--------------------------------------------------------------------------------
1 | # Requirements
2 |
3 | The overall goals of the Software Assessment Framework pilot are:
4 |
5 | 1. Create a framework that will enable research software to be assessed
6 | 2. Develop a subset of assessment functionality within that framework
7 | 3. Pilot the assessment framework on a selection of projects
8 |
9 | ## The Framework
10 |
11 | The framework is expected to consist of a website / application that enables a user
12 | to assess a piece of software, including:
13 | * manual self-assessment via questions
14 | * automated analysis of the software
15 | * aggregate data from both manual and automated assessment to produce scores/levels
16 | * visualisation of the results of the assessment
17 |
18 | The requirements for the framework are:
19 |
20 | * Access
21 | * It must be available through a web browser
22 | * It must allow a user to log in
23 | * It could allow a user to log in using their ORCID
24 | * It could allow a user to log in using their GitHub credentials
25 | * It must allow a user to create an "assessment"
26 | * It should allow a user to save and return to an assessment
27 | * It could allow a user to share an assessment
28 | * It could allow a user to display an open badge based on the assessment
29 | * Measurement
30 | * It must allow for two basic forms of measurement: self-assessment and automated analysis
31 | * self assessment e.g. does the software have a short description?
32 | * automated analysis e.g. what is the frequency of releases?
33 | * some things may be possible via both mechanisms e.g. does the software have a license?
34 | * It must allow for new measurement functionality to be added in the future easily as "plugins"
35 | * It must enable these measures to be grouped and combined in customisable ways, to produce numerical scores or levels
36 | * It could allow users to create their own custom assessments based on all available measures
37 | * Visualisation
38 | * It must enable the user to understand the results of each assessment category
39 | * It should show whether a piece of software has passed a particular assessment "level"
40 | * It could show results as spider plots or pie segments based on the four categories
41 | * Assessment Plugins
42 | * It must provide functionality for checking if the software has a description
43 | * It must provide functionality for checking if the software has a license
44 | * It should provide functionality for recording what type of license
45 | * It must provide functionality for identifying the availability of source code
46 | * It should provide functionality for identifying if the software provides test data
47 | * It should provide functionality for identifying if the software includes tests
48 | * It could provide functionality for measuring the test coverage
49 | * It should provide functionality for identifying the level of user documentation
50 | * It should provide functionality for identifying the level of developer documentation
51 | * It should provide functionality for working out the release frequency
52 | * It should provide functionality for working out the trend in contributors
53 | * It should provide functionality for identifying examples of use
54 | * It could provide functionality for identifying citations based on a DOI (via ImpactStory or Depsy API?)
55 | * It could provide functionality for identifying citations based on a URL
56 | * Integration
57 | * It should be able to interface with the Depsy API (e.g. to identify citations)
58 | * It could be able to interface to the Libraries.io API (e.g. to pull the Sourcerank info)
59 | * It must be able to interface to the GitHub API (e.g. to identify number of watchers, contributors, release frequency)
60 | * Programming Language
61 | * It should be written in either Python or Ruby
62 |
63 |
64 | # Related work
65 |
66 | Original prototype from CW:
67 | * https://github.com/OpenSourceHealthCheck/healthcheck
68 |
69 | Other assessment frameworks for data and software
70 | * http://depsy.org/package/python/scipy
71 | * https://libraries.io/pypi/scipy/sourcerank
72 | * https://www.openhub.net/p/scipy
73 | * http://ontosoft.org/ontology/software/
74 | * http://www.ontosoft.org/portal/#browse/Software-mZ8BhPHA5SQq
75 | * https://certificates.theodi.org/en/
76 | * http://www.datasealofapproval.org/en/
77 |
78 | Related work on software metadata:
79 | * Software Discovery Dashboard: https://github.com/mozillascience/software-discovery-dashboard
80 | * CodeMeta: https://github.com/codemeta/codemeta
81 | * Code as a Research Object: https://science.mozilla.org/projects/codemeta
82 |
83 | Repositories containing software with DOIs / identifiers
84 | * SciCrunch: https://scicrunch.org
85 | * figshare: https://figshare.com/
86 | * OSF: https://osf.io/
87 | * Zenodo: https://zenodo.org/
88 |
89 | Dependency analysis:
90 | * Depsy: http://depsy.org/
91 | * Libraries.io: https://libraries.io/
92 |
93 | Impact Analysis:
94 | * Lagotto: http://www.lagotto.io
95 | * ContentMine: http://contentmine.org/
96 | * ScholarNinja: https://github.com/ScholarNinja/software_metadata
97 |
98 | Automated code analysis:
99 | * http://coala-analyzer.org/
100 | * https://pypi.python.org/pypi/radon
101 | * https://www.openhub.net/
102 |
103 | Tools for identifying scientific software:
104 | * https://github.com/ScholarNinja/software_metadata
105 |
106 | Models for assessing code:
107 | * https://communitymodel.sharepoint.com/Pages/default.aspx
108 | * https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration
109 | * http://oss-watch.ac.uk/apps/openness/
110 |
111 | Making software more visible:
112 | * http://signposting.org/
113 |
114 |
115 | ## Models of representing success
116 |
117 | * Howison, J., Deelman, E., McLennan, M. J., Silva, R. F. da, & Herbsleb, J. D. (2015). [Understanding the scientific software ecosystem and its impact: Current and future measures](http://doi.org/10.1093/reseval/rvv014). Research Evaluation, 24(4), 454–470. http://doi.org/10.1093/reseval/rvv014
118 | * Crowston, K., Howison, J., & Annabi, H. (2006). [Information systems success in free and open source software development: Theory and measures](http://onlinelibrary.wiley.com/doi/10.1002/spip.259/abstract). Software Process: Improvement and Practice, 11(2), 148.
119 | * Delone, W., & Mclean, E. (2003). [The Delone and Mclean model of Information Systems Success: A ten year update](http://dl.acm.org/citation.cfm?id=1289767). Journal of MIS, 19(4), -30.
120 | * Subramaniam, C. et al. 2009. Determinants of open source software project success: A longitudinal study. Decision Support Systems, vol. 46, pp. 576-585.
121 | * English, R., and Schweik, C. 2007. Identifying success and abandonment of FLOSS commons: A classification of Sourceforge.net projects, Upgrade: The European Journal for the Informatics Professional VIII, vol. 6.
122 | * Wiggins, A. and Crowston, K. 2010. Reclassifying success and tragedy in FLOSS projects. Open Source Software: New Horizons, pp. 294-307.
123 | * Piggott, J. and Amrit, C. 2013. How Healthy Is My Project? Open Source Project Attributes as Indicators of Success. In Proceedings of the 9th International Conference on Open Source Systems. DOI: 10.1007/978-3-642-38928-3_3.
124 | * Gary C. Moore, Izak Benbasat. 1991. Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation Information Systems Research 19912:3 , 192-222
125 |
126 | ## Reusability and Maturity
127 |
128 | * Holibaugh, R et al. 1989. Reuse: where to begin and why. Proceedings of the conference on Tri-Ada '89: Ada technology in context: application, development, and deployment. p266-277. DOI: 10.1145/74261.74280.
129 | * Frazier, T.P., and Bailey, J.W. 1996. The Costs and Benefits of Domain-Oriented Software Reuse: Evidence from the STARS Demonstration Projects. Accessed on 21st July 2014 from: http://www.dtic.mil/dtic/tr/fulltext/u2/a312063.pdf
130 | * CMMI Product Team, 2006. CMMI for Development, Version 1.2. SEI Identifier: CMU/SEI-2006-TR-008.
131 | * Gardler, R. 2013. Software Sustainability Maturity Model. Accessed on 21st July 2014 from: http://oss-watch.ac.uk/resources/ssmm
132 | * NASA Earth Science Data Systems Software Reuse Working Group (2010). Reuse Readiness Levels (RRLs), Version 1.0. April 30, 2010. Accessed from: http://www.esdswg.org/softwarereuse/Resources/rrls/
133 | * Marshall, J.J., and Downs, R.R. 2008. Reuse Readiness Levels as a Measure of Software Reusability. In proceedings of Geoscience and Remote Sensing Symposium. Volume 3. P1414-1417. DOI: 10.1109/IGARSS.2008.4779626
134 | * Chue Hong, N. 2013. Five stars of research software. Accessed on 8th July 2016 from: http://www.software.ac.uk/blog/2013-04-09-five-stars-research-software
135 |
136 | ## Software Quality
137 |
138 | * Bourque, P. and Fairley, R.E. eds., (2014) Guide to the Software Engineering Body of Knowledge, Version 3.0, IEEE Computer Society http://www.swebok.org
139 | * ISO/IEC 25010:2011(en) (2011) Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — System and software quality models
140 | * Microsoft Code Analysis Team Blog. 2007. Maintainability Index Range and Meaning. Accessed on 8th July 2016 from: https://blogs.msdn.microsoft.com/codeanalysis/2007/11/20/maintainability-index-range-and-meaning/
141 | * Sjoberg, D.I.K. et al. 2012. Questioning Software Maintenance Metrics: A Comparative Case Study. In proceedings of ESEM’12. P107-110. DOI: 10.1145/2372251.2372269
142 |
143 |
144 | ## Badging
145 |
146 | * Blohowiak, B. B., Cohoon, J., de-Wit, L., Eich, E., Farach, F. J., Hasselman, F., … DeHaven, A. C. (2016, October 10). [Badges to Acknowledge Open Practices](http://osf.io/tvyxz). Retrieved from http://osf.io/tvyxz
147 |
--------------------------------------------------------------------------------
/docs/roadmap.md:
--------------------------------------------------------------------------------
1 | # Timeline
2 |
3 | * January 2017 - Design work started
4 | * late January - decision on which measures to implement
5 | * mid February 2017 - Pilot studies with selected projects from Open Call (via Steve), STFC (via Catherine Jones) and RSE community via Christopher Woods
6 | * 21st March - EPSRC SLA Meeting (SAF progress to be updated)
7 | * __23rd March - e-Infrastructure SAT Meeting__ (SAF to be demoed, and pilot studies presented)
8 | * 27-29th March - CW17 (further demos, potential hackathon projects)
9 |
--------------------------------------------------------------------------------
/plugins/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/plugins/__init__.py
--------------------------------------------------------------------------------
/plugins/metric/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | import yapsy.PluginManager
4 |
5 |
6 | def load():
7 | manager = yapsy.PluginManager.PluginManager()
8 | manager.setPluginPlaces([os.path.dirname(__file__)])
9 | manager.collectPlugins()
10 | active = [plugin.plugin_object for plugin in manager.getAllPlugins()]
11 | return active
12 |
--------------------------------------------------------------------------------
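A minimal sketch of how the metric plugins discovered by `load()` above can be inspected, assuming the project root is on the Python path; the attributes printed are the ones declared on the `Metric` base class in `plugins/metric/metric.py`, and the loop is illustrative rather than how `app/views.py` actually drives the plugins.

```python
# Illustrative only: list the metric plugins that yapsy discovers in this package.
import plugins.metric

for m in plugins.metric.load():
    mode = "interactive" if m.SELF_ASSESSMENT else "automatic"
    print("%s [%s] (%s): %s" % (m.NAME, m.CATEGORY, mode, m.SHORT_DESCRIPTION))
```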
/plugins/metric/contributing.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import plugins.metric.metric as metric
3 |
4 |
5 | class ContributingMetric(metric.Metric):
6 | """
7 | Locate a CONTRIBUTING file
8 | Looks in the root of the repository, for files named: 'CONTRIBUTING',
9 | 'CONTRIBUTING.txt', 'CONTRIBUTING.md', 'CONTRIBUTING.html'
10 | Scores:
11 | 0 if no CONTRIBUTING file found
12 | 100 if CONTRIBUTING file with non-zero length contents is found
13 | """
14 | # TODO: decide which category this metric should fall into
15 |
16 | NAME = "Contribution Policy"
17 | IDENTIFIER = "uk.ac.software.saf.contributing"
18 | CATEGORY = "USABILITY"
19 | SHORT_DESCRIPTION = "Has a CONTRIBUTING file?"
20 |     LONG_DESCRIPTION = "Test for the existence of a CONTRIBUTING file."
21 |
22 | def run(self, software, helper):
23 | """
24 |         :param software: A Software entity
25 | :param helper: A Repository Helper
26 | :return:
27 | """
28 | self.score = 0
29 | self.feedback = "A short, descriptive, CONTRIBUTING file can provide a useful first port of call for new developers."
30 | candidate_files = helper.get_files_from_root(['CONTRIBUTING', 'CONTRIBUTING.txt', 'CONTRIBUTING.md', 'CONTRIBUTING.html'])
31 | for file_name, file_contents in candidate_files.items():
32 | logging.info('Locating CONTRIBUTING')
33 | self.score = 0
34 | if file_contents is not None:
35 | self.score = 100
36 | self.feedback = "CONTRIBUTING file found"
37 | break
38 | else:
39 | self.score = 0
40 | self.feedback = "A short, descriptive, CONTRIBUTING file can provide a useful first port of call for new developers."
41 |
42 | def get_score(self):
43 | """Get the results of running the metric.
44 | :returns:
45 | 0 if no CONTRIBUTING found
46 | 100 if an identifiable CONTRIBUTING file is found
47 | """
48 | return self.score
49 |
50 | def get_feedback(self):
51 | """
52 | A few sentences describing the outcome, and providing tips if the outcome was not as expected
53 | :return:
54 | """
55 | return self.feedback
56 |
--------------------------------------------------------------------------------
/plugins/metric/contributing.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = ContributingMetric
3 | Module = contributing
4 |
5 | [Documentation]
6 | Description = Find a CONTRIBUTING file
7 |
--------------------------------------------------------------------------------
/plugins/metric/documentation_developer.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import plugins.metric.metric as metric
3 |
4 |
5 | class DocumentationDeveloperMetric(metric.Metric):
6 | """
7 | Interactive, self-assessment metric
8 | Ask submitter about the extent of their developer documentation
9 | Scores:
10 | 0 if "No documentation"
11 | 50 if "Minimal developer documentation"
12 | 100 if "Comprehensive documentation"
13 | """
14 |
15 | NAME = "Developer Documentation"
16 | IDENTIFIER = "uk.ac.software.saf.documentation_developer"
17 | SELF_ASSESSMENT = True
18 | CATEGORY = "MAINTAINABILITY"
19 | SHORT_DESCRIPTION = "Developer Documentation?"
20 | LONG_DESCRIPTION = "Do you provide documentation for developers wanting to contribute to, extend or fix bugs in your project?"
21 |
22 | def run(self, software, helper=None, form_data=None):
23 | """
24 | The main method to run the metric.
25 | :param software: An instance of saf.software.Software
26 | :param helper: An instance of plugins.repository.helper.RepositoryHelper
27 | :param form_data: (Optional) For interactive
28 | :return:
29 | """
30 | self.score = int(form_data)
31 | if self.score == 0:
32 | self.feedback = "Without at least basic documentation, developers are unlikely to contribute new code or fix bugs."
33 | if self.score == 50:
34 | self.feedback = "Basic developer docs are a great start. Consider providing more comprehensive documentation too."
35 | if self.score == 100:
36 | self.feedback = "Great, your software has excellent documentation for developers."
37 |
38 | def get_score(self):
39 | return self.score
40 |
41 | def get_feedback(self):
42 | return self.feedback
43 |
44 | def get_ui_choices(self):
45 | return {
46 | "100": "Comprehensive developer documentation covering all aspects of architecture, contribution and extension",
47 | "50": "Minimal developer documentation only. e.g. an architectural overview",
48 | "0": "No developer docmentation"
49 | }
50 |
--------------------------------------------------------------------------------
/plugins/metric/documentation_developer.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = DocumentationDeveloperMetric
3 | Module = documentation_developer
4 |
5 | [Documentation]
6 | Description = Ask submitter about the extent of their developer documentation
7 |
--------------------------------------------------------------------------------
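For self-assessment metrics such as the one above, the web form renders `get_ui_choices()` as radio buttons and posts the selected value back as `form_data`. A hedged sketch of that round trip, driving `DocumentationDeveloperMetric` directly with a hard-coded selection in place of a real form submission:

```python
# Illustrative round trip for a self-assessment metric; "50" stands in for the
# value a submitter would pick from the generated radio buttons.
from plugins.metric.documentation_developer import DocumentationDeveloperMetric

m = DocumentationDeveloperMetric()
print(m.get_ui_choices())      # {"100": "...", "50": "...", "0": "..."}
m.run(None, form_data="50")    # no repository helper is needed for interactive metrics
print(m.get_score(), m.get_feedback())
```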
/plugins/metric/documentation_user.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import plugins.metric.metric as metric
3 |
4 |
5 | class DocumentationUserMetric(metric.Metric):
6 | """
7 | Interactive, self-assessment metric
8 | Ask submitter about the extent of their end-user documentation
9 | Scores:
10 | 0 if "No documentation"
11 | 50 if "Minimal installation documentation"
12 | 100 if "Comprehensive documentation"
13 | """
14 |
15 | NAME = "End-user Documentation"
16 | IDENTIFIER = "uk.ac.software.saf.documentation_user"
17 | SELF_ASSESSMENT = True
18 | CATEGORY = "USABILITY"
19 | SHORT_DESCRIPTION = "End user documentation?"
20 | LONG_DESCRIPTION = "Do you provide end-user documentation? If so, how extensive is it?"
21 |
22 | def run(self, software, helper=None, form_data=None):
23 | """
24 | The main method to run the metric.
25 | :param software: An instance of saf.software.Software
26 | :param helper: An instance of plugins.repository.helper.RepositoryHelper
27 | :param form_data: (Optional) For interactive
28 | :return:
29 | """
30 | self.score = int(form_data)
31 | if self.score == 0:
32 | self.feedback = "Without at least basic documentation, end users are unlikely to use your software."
33 | if self.score == 50:
34 | self.feedback = "Installation instructions are a great start. Consider providing more comprehensive documentation too."
35 | if self.score == 100:
36 | self.feedback = "Great, your software is well documented."
37 |
38 | def get_score(self):
39 | return self.score
40 |
41 | def get_feedback(self):
42 | return self.feedback
43 |
44 | def get_ui_choices(self):
45 | return {
46 | "100": "Comprehensive documentation covering all aspects of installation and use",
47 | "50": "Minimal installation documentation only",
48 | "0": "No Docmentation"
49 | }
50 |
--------------------------------------------------------------------------------
/plugins/metric/documentation_user.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = DocumentationUserMetric
3 | Module = documentation_user
4 |
5 | [Documentation]
6 | Description = Ask submitter about the extent of their end-user documentation
7 |
--------------------------------------------------------------------------------
/plugins/metric/freshness.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import plugins.metric.metric as metric
3 |
4 |
5 | class FreshnessMetric(metric.Metric):
6 | """
7 | Get a list of commits and calculate a freshness score based on this
8 |     The scoring below is a simple placeholder.
9 | Scores:
10 | 0 if no commits found
11 | 50 if one commit found
12 | 100 if more than one commit found
13 | """
14 |
15 | NAME = "Development Activity"
16 | IDENTIFIER = "uk.ac.software.saf.freshness"
17 | SELF_ASSESSMENT = False
18 | CATEGORY = "MAINTAINABILITY"
19 | SHORT_DESCRIPTION = "Actively developed?"
20 | LONG_DESCRIPTION = "Calculate the freshness of a repository."
21 |
22 | def run(self, software, helper):
23 | """
24 |         :param software: A Software entity
25 | :param helper: A Repository Helper
26 | :return:
27 | """
28 | self.score = 0
29 | list_commits = helper.get_commits()
30 |
31 | number_commits = len(list_commits)
32 |
33 | if number_commits <= 0:
34 | self.score = 0
35 | self.feedback = "This is a DEAD repository."
36 | else:
37 | if number_commits <= 1:
38 | self.score = 50
39 | self.feedback = "Good job! You've dumped your code in a repo!"
40 | else:
41 | self.score = 100
42 | self.feedback = "Excellent!!! You have an active repo!!!"
43 |
44 |
45 | def get_score(self):
46 | """
47 | Get the results of running the metric.
48 | :returns:
49 | 0 if no commits found
50 | 50 if one commit found
51 | 100 if more than one commit found
52 | """
53 | return self.score
54 |
55 | def get_feedback(self):
56 | """
57 | A few sentences describing the outcome, and providing tips if the outcome was not as expected
58 | :return:
59 | """
60 | return self.feedback
61 |
--------------------------------------------------------------------------------
/plugins/metric/freshness.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = FreshnessMetric
3 | Module = freshness
4 |
5 | [Documentation]
6 | Description = Calculate the freshness of a repository
7 |
--------------------------------------------------------------------------------
/plugins/metric/license.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import plugins.metric.metric as metric
3 |
4 |
5 | class LicenseMetric(metric.Metric):
6 | """
7 | Locate and attempt to identify a software license file
8 | Looks in the root of the repository, for files named: 'LICENSE', 'LICENSE.txt', 'LICENSE.md', 'LICENSE.html','LICENCE', 'LICENCE.txt', 'LICENCE.md', 'LICENCE.html'
9 | Identifies LGPL, GPL, MIT, BSD, Apache 1.0/1.1/2.0
10 | Scores:
11 | 0 if no license found
12 |     50 if a license file is present but not identifiable
13 |     100 if an identifiable license is found
14 | """
15 | NAME = "Licensing"
16 | IDENTIFIER = "uk.ac.software.saf.license"
17 | SELF_ASSESSMENT = False
18 | CATEGORY = "MAINTAINABILITY"
19 | SHORT_DESCRIPTION = "Has a license file?"
20 |     LONG_DESCRIPTION = "Test for the existence of a file called 'LICENSE' and attempt to identify it from its contents."
21 |
22 | def run(self, software, helper, form_data=None):
23 | """
24 |         :param software: A Software entity
25 | :param helper: A Repository Helper
26 | :param form_data: (Optional) For interactive
27 | :return:
28 | """
29 | self.score = 0
30 | self.feedback = "A license helps people to know what terms they are bound by, if any, when using and/or updating your software."
31 | candidate_files = helper.get_files_from_root(['LICENSE', 'LICENSE.txt', 'LICENSE.md', 'LICENSE.html','LICENCE', 'LICENCE.txt', 'LICENCE.md', 'LICENCE.html'])
32 | for file_name, file_contents in candidate_files.items():
33 | # Identify license
34 | # FixMe - If there is >1 file it's last man in - not desirable
35 | logging.info('Identifying license')
36 | if file_contents is not None:
37 |
38 | if "LESSER" in file_contents:
39 | license_type = "LGPL"
40 | elif "GNU GENERAL PUBLIC LICENSE" in file_contents:
41 | license_type = "GPL"
42 | elif " MIT " in file_contents:
43 | license_type = "MIT"
44 | elif "(INCLUDING NEGLIGENCE OR OTHERWISE)" in file_contents:
45 | license_type = "BSD"
46 | elif "Apache" in file_contents:
47 | if "2.0" in file_contents:
48 | license_type = "Apache 2.0"
49 | elif "1.1" in file_contents:
50 | license_type = "Apache 1.1"
51 | elif "1.0" in file_contents:
52 | license_type = "Apache 1.0"
53 | else:
54 | license_type = "Apache"
55 | else:
56 | license_type = None
57 |
58 | if license_type is not None:
59 | # License identifiable - full marks
60 | self.score = 100
61 | self.feedback = "Well done! This code is licensed under the %s license." % (license_type)
62 | break
63 | else:
64 | # License not identifiable - half marks
65 | self.score = 50
66 | self.feedback = "Well done! This code is licensed, but we can't work out what license it is using. Is it a standard open-source licence?"
67 |
68 | def get_score(self):
69 | """Get the results of running the metric.
70 | :returns:
71 | 0 if no license found
72 |         50 if a license file is present but not identifiable
73 |         100 if an identifiable license is found
74 | """
75 | return self.score
76 |
77 | def get_feedback(self):
78 | """
79 | A few sentences describing the outcome, and providing tips if the outcome was not as expected
80 | :return:
81 | """
82 | return self.feedback
83 |
--------------------------------------------------------------------------------
/plugins/metric/license.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = LicenseMetric
3 | Module = license
4 |
5 | [Documentation]
6 | Description = Find and identify the software license
--------------------------------------------------------------------------------
/plugins/metric/metric.py:
--------------------------------------------------------------------------------
1 | import yapsy.IPlugin
2 |
3 |
4 | class Metric(yapsy.IPlugin.IPlugin):
5 | """
6 | Base class for repository metric plugins.
7 | Metrics should be atomic, measuring one thing and providing a score 0-100.
8 | For example, the absence of a license file might result in a score of 0,
9 | If a LICENSE file is present, but not identifiable as known license, a score of 50
10 | The presence of an identifiable license results in a score of 100.
11 | """
12 |
13 | NAME = "UNSET"
14 | # Short, descriptive, human readable name for this metric - e.g. "Vitality", "License", "Freshness"
15 |
16 | IDENTIFIER = "uk.ac.software.saf.metric"
17 | # A *meaningful* unique identifier for this metric.
18 | # To avoid collisions, a reverse domain name style notation may be used, or your-nick.metric-name
19 | # The final part should coincide with the module name
20 |
21 | CATEGORY = "UNSET"
22 | # "AVAILABILITY", "USABILITY", "MAINTAINABILITY", "PORTABILITY"
23 |
24 | SHORT_DESCRIPTION = "UNSET"
25 | # A one or two sentence description of the metric, to be displayed to the user
26 | # If the metric is interactive, this will be presented to the user as the question
27 |
28 | LONG_DESCRIPTION = "UNSET"
29 | # Longer description of the metric, how it works and explanation of scoring
30 |
31 | SELF_ASSESSMENT = None
32 | # None / False - Non-interactive - Doesn't require user input beyond the repository URL
33 | # True - Interactive - Assessment is based solely on user input
34 |
35 | def run(self, software, helper=None, form_data=None):
36 | """
37 | The main method to run the metric.
38 | :param software: An instance of saf.software.Software
39 | :param helper: (Optional) An instance of plugins.repository.helper.RepositoryHelper, None if interactive
40 |         :param form_data: (Optional) For interactive
41 | :return:
42 | """
43 | raise NotImplementedError("This method must be overridden")
44 |
45 | def get_score(self):
46 | """Get the results of running the metric.
47 | :returns:
48 | This should be an integer between 0 and 100
49 | """
50 | raise NotImplementedError("This method must be overridden")
51 |
52 | def get_feedback(self):
53 | """
54 | A few sentences describing the outcome, and providing tips if the outcome was not as expected
55 | :return:
56 | """
57 | raise NotImplementedError("This method must be overridden")
58 |
59 | def get_ui_choices(self):
60 | """
61 | Optional. If the metric is interactive, a set of radio buttons is generated based on this.
62 |         :returns: A Dictionary {Option Value: Option label}
63 | e.g:
64 | return {
65 | "100": "Comprehensive documentation covering all aspects of installation and use",
66 | "50": "Minimal installation documentation only",
67 | "0": "No Docmentation"
68 | }
69 | The selected Value is returned to the run() method above.
70 | """
--------------------------------------------------------------------------------
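The base class above spells out the contract a metric plugin must honour. The following is a hedged sketch of a new, hypothetical non-interactive metric written against that contract; the `ChangelogMetric` name, its identifier and the `changelog.py` file name are invented for illustration and are not part of this repository. A companion `changelog.yapsy-plugin` descriptor declaring `Name = ChangelogMetric` and `Module = changelog` would be needed alongside it, as for the other metrics.

```python
# plugins/metric/changelog.py -- hypothetical example, not part of this repository
import plugins.metric.metric as metric


class ChangelogMetric(metric.Metric):
    """Score 100 if a non-empty CHANGELOG file is found in the repository root, 0 otherwise."""

    NAME = "Changelog"
    IDENTIFIER = "uk.ac.software.saf.changelog"  # invented identifier for this sketch
    SELF_ASSESSMENT = False
    CATEGORY = "USABILITY"
    SHORT_DESCRIPTION = "Has a CHANGELOG file?"
    LONG_DESCRIPTION = "Test for the existence of a CHANGELOG file."

    def run(self, software, helper, form_data=None):
        self.score = 0
        self.feedback = "A CHANGELOG helps users see what has changed between releases."
        candidate_files = helper.get_files_from_root(['CHANGELOG', 'CHANGELOG.txt', 'CHANGELOG.md'])
        for file_name, file_contents in candidate_files.items():
            if file_contents:
                self.score = 100
                self.feedback = "CHANGELOG found"
                break

    def get_score(self):
        return self.score

    def get_feedback(self):
        return self.feedback
```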
/plugins/metric/readme.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import plugins.metric.metric as metric
3 |
4 |
5 | class ReadmeMetric(metric.Metric):
6 | """
7 | Locate a README file
8 | Looks in the root of the repository, for files named: 'README', 'README.txt', 'README.md', 'README.html'
9 | Scores:
10 | 0 if no README found
11 | 100 if README file with non-zero length contents is found
12 | """
13 |
14 | NAME = "README"
15 | IDENTIFIER = "uk.ac.software.saf.readme"
16 | SELF_ASSESSMENT = False
17 | CATEGORY = "USABILITY"
18 | SHORT_DESCRIPTION = "Has a README file?"
19 |     LONG_DESCRIPTION = "Test for the existence of a file called 'README'."
20 |
21 | def run(self, software, helper):
22 | """
23 |         :param software: A Software entity
24 | :param helper: A Repository Helper
25 | :return:
26 | """
27 | self.score = 0
28 | candidate_files = helper.get_files_from_root(['README', 'README.txt', 'README.md', 'README.html'])
29 | for file_name, file_contents in candidate_files.items():
30 | logging.info('Locating README')
31 | self.score = 0
32 | if file_contents is not None:
33 | self.score = 100
34 | self.feedback = "README found"
35 | break
36 | else:
37 | self.score = 0
38 | self.feedback = "A short, descriptive, README file can provide a useful first port of call for new users."
39 |
40 | def get_score(self):
41 | """Get the results of running the metric.
42 | :returns:
43 | 0 if no README found
44 | 100 if an identifiable README is found
45 | """
46 | return self.score
47 |
48 | def get_feedback(self):
49 | """
50 | A few sentences describing the outcome, and providing tips if the outcome was not as expected
51 | :return:
52 | """
53 | return self.feedback
54 |
--------------------------------------------------------------------------------
/plugins/metric/readme.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = ReadmeMetric
3 | Module = readme
4 |
5 | [Documentation]
6 | Description = Find a README file
--------------------------------------------------------------------------------
/plugins/metric/usability/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/softwaresaved/software-assessment-framework/ce1c786fe2c1e9547d99ad9a4d629305e8a8ace0/plugins/metric/usability/__init__.py
--------------------------------------------------------------------------------
/plugins/metric/vitality.py:
--------------------------------------------------------------------------------
1 | import logging
2 | from datetime import datetime, date, time, timedelta
3 | import plugins.metric.metric as metric
4 |
5 |
6 | class VitalityMetric(metric.Metric):
7 | """
8 | Get a list of commits and calculate a vitality score based on this
9 |
10 | Scores:
11 |     0 if committer numbers are decreasing
12 |     50 if committer numbers are stable
13 | 100 if committer numbers are increasing
14 |
15 | This should be replaced by a score which reflects the actual trends we see.
16 | """
17 | NAME = "Vitality"
18 | IDENTIFIER = "uk.ac.software.saf.vitality"
19 | SELF_ASSESSMENT = False
20 | CATEGORY = "MAINTAINABILITY"
21 | SHORT_DESCRIPTION = "Calculate committer trend"
22 | LONG_DESCRIPTION = "Calculate the vitality of a repository."
23 | PERIOD = 180 # Comparison period in days
24 |
25 | def unique_committers_in_date_range(self, helper, start_date, end_date):
26 | """
27 | :param helper: A Repository Helper
28 | :param start_date: datetime for start of range
29 | :param end_date: datetime for end of range
30 | :return: number of unique committers to repo during that period
31 | """
32 | unique_committers = set([])
33 |
34 | try:
35 | list_commits = helper.get_commits(since=start_date, until=end_date)
36 | except Exception as e:
37 |             logging.warning("Exception raised when fetching commits: %s", e)
38 |             list_commits = []
39 |         # TODO: the check for a "web-flow" committer is GitHub specific
40 | for c in list_commits:
41 | if c.committer not in unique_committers and str(c.committer) != "web-flow":
42 | unique_committers.add(c.committer)
43 |
44 | return len(unique_committers)
45 |
46 |
47 | def run(self, software, helper):
48 | """
49 |         :param software: A Software entity
50 | :param helper: A Repository Helper
51 | :return:
52 | """
53 |
54 | self.score = 0
55 |
56 | # Get number of unique committers in this period
57 |
58 | end_date = datetime.utcnow()
59 | start_date = end_date - timedelta(days=self.PERIOD)
60 | current_unique_committers = self.unique_committers_in_date_range(helper, start_date, end_date)
61 |
62 | # Get number of unique committers in the previous period
63 |
64 | end_date = start_date
65 | start_date = end_date - timedelta(days=self.PERIOD)
66 | previous_unique_committers = self.unique_committers_in_date_range(helper, start_date, end_date)
67 |
68 | # Calculate a score based on comparing the number of committers
69 | # For now, use a very simplistic scoring system:
70 | # Decreasing = bad = 0
71 | # Level = okay = 50
72 | # Increasing = good = 100
73 |
74 | if current_unique_committers < previous_unique_committers:
75 | self.score = 0
76 | self.feedback = "Number of contributors is declining."
77 | elif current_unique_committers == previous_unique_committers:
78 | self.score = 50
79 | self.feedback = "Number of contributors is stable."
80 | elif current_unique_committers > previous_unique_committers:
81 | self.score = 100
82 | self.feedback = "Number of contributors is growing."
83 | else:
84 | logging.warning("Something has gone wrong in comparison")
85 |
86 | def get_score(self):
87 | """
88 | Get the results of running the metric.
89 | :returns:
90 |         0 if committer numbers are decreasing
91 |         50 if committer numbers are stable
92 | 100 if committer numbers are increasing
93 | """
94 | return self.score
95 |
96 | def get_feedback(self):
97 | """
98 | A few sentences describing the outcome, and providing tips if the outcome was not as expected
99 | :return:
100 | """
101 | return self.feedback
102 |
--------------------------------------------------------------------------------
/plugins/metric/vitality.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = VitalityMetric
3 | Module = vitality
4 |
5 | [Documentation]
6 | Description = Calculate the vitality of a repository
7 |
--------------------------------------------------------------------------------
/plugins/repository/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | import yapsy.PluginManager
4 |
5 |
6 | def load():
7 | manager = yapsy.PluginManager.PluginManager()
8 | manager.setPluginPlaces([os.path.dirname(__file__)])
9 | manager.collectPlugins()
10 | active = [plugin.plugin_object for plugin in manager.getAllPlugins()]
11 | return active
12 |
--------------------------------------------------------------------------------
/plugins/repository/bitbucket.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from github3 import GitHub
4 |
5 | import config
6 | from plugins.repository.helper import *
7 |
8 |
9 | class BitBucketHelper(RepositoryHelper):
10 |
11 | def __init__(self, repo_url=None):
12 | self.repo_url = repo_url
13 |
14 | def can_process(self, url):
15 | if "bitbucket.com" in url:
16 | self.repo_url = url
17 | return True
18 | else:
19 | return False
20 |
21 | def login(self):
22 | """
23 | Login using the appropriate credentials
24 | :return:
25 | """
26 | raise NotImplementedError("This method must be overridden")
27 |
28 | def get_files_from_root(self, candidate_filenames):
29 | """
30 | Given a list of candidate file names, examine the repository root, returning the file names and contents
31 | :param candidate_filenames: A list of the files of interest e.g. ['COPYING','LICENSE']
32 | :return: A Dictionary of the form {'filename':file_contents,...}
33 | """
34 | raise NotImplementedError("This method must be overridden")
--------------------------------------------------------------------------------
/plugins/repository/bitbucket.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = BitBucketHelper
3 | Module = bitbucket
4 |
5 | [Documentation]
6 | Description = Implements RepositoryHelper methods for BitBucket
--------------------------------------------------------------------------------
/plugins/repository/github.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from github3 import GitHub
4 |
5 | import config
6 | from plugins.repository.helper import *
7 |
8 |
9 | class GitHubHelper(RepositoryHelper):
10 |
11 | def __init__(self, repo_url=None):
12 | self.repo_url = repo_url
13 | self.user_name = None
14 | self.repo_name = None
15 | self.github = None # GitHub Object from github3.py
16 | self.repo = None
17 | self.api_token = config.github_api_token
18 |
19 | def can_process(self, url):
20 | if "github.com" in url:
21 | self.repo_url = url
22 | return True
23 | else:
24 | return False
25 |
26 | def login(self):
27 | # Log in
28 | logging.info('Logging in to GitHub')
29 | try:
30 | # Try connecting with the supplied API token
31 |             # ToDo: Check what actually happens when wrong credentials are supplied - I suspect nothing
32 | self.github = GitHub(token=self.api_token)
33 |         except Exception:
34 | logging.warning('Login to GitHub failed')
35 |
36 | # Check supplied repo url
37 |         # Accept both clone URLs (ending in .git) and plain GitHub web URLs
38 | if ".git" in self.repo_url:
39 | # We're dealing with a clone URL
40 | if "git@github.com" in self.repo_url:
41 | # SSH clone URL
42 | splitted = self.repo_url.split(':')[1].split('/')
43 | self.user_name = splitted[-2]
44 | self.repo_name = splitted[-1]
45 | elif "https" in self.repo_url:
46 | # HTTPS URL
47 | splitted = self.repo_url.split('/')
48 | self.user_name = splitted[-2]
49 | self.repo_name = splitted[-1]
50 | self.repo_name = self.repo_name.replace('.git', '')
51 | else:
52 | if self.repo_url.endswith("/"):
53 | self.repo_url = self.repo_url[:-1] # Remove trailing slash
54 | splitted = self.repo_url.split('/')
55 | self.user_name = splitted[-2]
56 | self.repo_name = splitted[-1]
57 |
58 | logging.info(
59 | 'Trying to connect to repository with Username: ' + self.user_name + " / Repo: " + self.repo_name + "...")
60 |
61 | self.repo = self.github.repository(self.user_name, self.repo_name)
62 | if self.repo:
63 | logging.info("...Success")
64 | else:
65 | logging.warning("Unable to connect to selected GitHub repository - check the URL and permissions")
66 | raise RepositoryHelperRepoError("Unable to connect to selected GitHub repository - check the URL and permissions. Supply an API token if the repository is private.")
67 |
68 | def get_files_from_root(self, candidate_filenames):
69 | """
70 | Given a list of candidate file names, examine the repository root, returning the file names and contents
71 | :param candidate_filenames: A list of the files of interest e.g. ['COPYING','LICENSE']
72 | :return: A Dictionary of the form {'filename':file_contents,...}
73 | """
74 | found_files = {}
75 | # Get all files in the root directory of the repo
76 | root_files = self.repo.contents('/')
77 | root_files_iter = root_files.items()
78 | for name, contents in root_files_iter:
79 | for poss_name in candidate_filenames:
80 | if poss_name in name.upper():
81 | logging.info("Found a candidate file: " + name)
82 | found_files[name] = self.repo.contents(name).decoded.decode('UTF-8')
83 |
84 | return found_files
85 |
86 | def get_commits(self, sha=None, path=None, author=None, number=-1, etag=None, since=None, until=None):
87 | """
88 | Return a list of all commits in a repository
89 | :params:
90 | Parameters:
91 | sha (str) – (optional), sha or branch to start listing commits from
92 | path (str) – (optional), commits containing this path will be listed
93 | author (str) – (optional), GitHub login, real name, or email to filter commits by (using commit author)
94 | number (int) – (optional), number of commits to return. Default: -1 returns all commits
95 | etag (str) – (optional), ETag from a previous request to the same endpoint
96 | since (datetime or string) – (optional), Only commits after this date will be returned. This can be a datetime or an ISO8601 formatted date string.
97 | until (datetime or string) – (optional), Only commits before this date will be returned. This can be a datetime or an ISO8601 formatted date string.
98 | :return: a list of Commit
99 | """
100 |
101 | # TODO: Should investigate proper use of GitHubIterator to help ratelimiting: https://github3py.readthedocs.io/en/master/examples/iterators.html
102 |
103 | list_commits = []
104 |
105 | for c in self.repo.iter_commits(sha, path, author, number, etag, since, until):
106 | list_commits.append(c)
107 |
108 | logging.info('Retrieved ' + str(len(list_commits)) + ' commits from repository with Username: ' + self.user_name + " / Repo: " + self.repo_name + "...")
109 |
110 | return list_commits
111 |
--------------------------------------------------------------------------------
/plugins/repository/github.yapsy-plugin:
--------------------------------------------------------------------------------
1 | [Core]
2 | Name = GitHubHelper
3 | Module = github
4 |
5 | [Documentation]
6 | Description = Implements RepositoryHelper methods using the GitHub3 API
--------------------------------------------------------------------------------
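A hedged sketch of exercising `GitHubHelper` outside the web app, assuming a valid `github_api_token` in `config.py` and network access; the repository URL used here is this project's own, and the calls mirror the ones the metric plugins rely on.

```python
# Illustrative use of the GitHub repository helper on its own.
from datetime import datetime, timedelta
from plugins.repository.github import GitHubHelper

gh = GitHubHelper()
if gh.can_process("https://github.com/softwaresaved/software-assessment-framework"):
    gh.login()  # raises RepositoryHelperRepoError if the repository cannot be reached
    files = gh.get_files_from_root(['README', 'LICENSE'])
    commits = gh.get_commits(since=datetime.utcnow() - timedelta(days=180))
    print(sorted(files.keys()), len(commits))
```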
/plugins/repository/helper.py:
--------------------------------------------------------------------------------
1 | import yapsy.IPlugin
2 | import logging
3 | import plugins.repository
4 |
5 |
6 | class RepositoryHelper(yapsy.IPlugin.IPlugin):
7 | """
8 | Base class for repository helper plugins
9 | """
10 |
11 | def can_process(self, url):
12 | """
13 | Parse URL string to determine if implementation can work with corresponding repository
14 | :param url: The repository URL
15 | :return: True if the helper can handle this URL. False if it can't
16 | """
17 | raise NotImplementedError("This method must be overridden")
18 |
19 | def login(self):
20 | """
21 | Login using the appropriate credentials
22 | :raises RepositoryHelperRepoError if the login fails due to bad credentials or missing repository
23 | """
24 | raise NotImplementedError("This method must be overridden")
25 |
26 | def get_files_from_root(self, candidate_filenames):
27 | """
28 | Given a list of candidate file names, examine the repository root, returning the file names and contents
29 | :param candidate_filenames: A list of the files of interest e.g. ['COPYING','LICENSE']
30 | :return: A Dictionary of the form {'filename':file_contents,...}
31 | """
32 | raise NotImplementedError("This method must be overridden")
33 |
34 | def get_commits(self, sha=None, path=None, author=None, number=-1, etag=None, since=None, until=None):
35 | """
36 | Return a list of all commits in a repository
37 | :params:
38 | Parameters:
39 | sha (str) – (optional), sha or branch to start listing commits from
40 | path (str) – (optional), commits containing this path will be listed
41 | author (str) – (optional), GitHub login, real name, or email to filter commits by (using commit author)
42 | number (int) – (optional), number of commits to return. Default: -1 returns all commits
43 | etag (str) – (optional), ETag from a previous request to the same endpoint
44 | since (datetime or string) – (optional), Only commits after this date will be returned. This can be a datetime or an ISO8601 formatted date string.
45 | until (datetime or string) – (optional), Only commits before this date will be returned. This can be a datetime or an ISO8601 formatted date string.
46 | :return: a list of Commit
47 | """
48 | raise NotImplementedError("This method must be overridden")
49 |
50 |
51 | def find_repository_helper(url):
52 | """
53 | Use the plugin system to find an implementation of RepositoryHelper that is able to communicate with the repository
54 | :param url: The repository URL
55 | :return: An implementation of RepositoryHelper
56 | """
57 | logging.info("Finding a repository helper plugin to handle URL:"+url)
58 | repository_helpers = plugins.repository.load()
59 | for helper in repository_helpers:
60 | if helper.can_process(url):
61 | logging.info(helper.__class__.__name__ + " - can handle that URL")
62 | return helper
63 | else:
64 | logging.debug(helper.__class__.__name__ + " - can't handle that URL")
65 |
66 | logging.warning("Unable to identidy a repository helper")
67 | return None
68 |
69 | # Exceptions
70 |
71 |
72 | class RepositoryHelperError(Exception):
73 | """Base class for exceptions in this module."""
74 |
75 | def __init__(self, message):
76 | self.message = message
77 |
78 |
79 | class RepositoryHelperRepoError(RepositoryHelperError):
80 | """Exception raised for errors accessing the given repository
81 |
82 | Attributes:
83 | message -- explanation of the error
84 | """
85 |
86 |
87 | class Commit:
88 | """Base data structure for handling commits"""
89 |
90 | def __init__(self, commitid, committer, timestamp):
91 | self.commitid = commitid
92 | self.committer = committer
93 | self.timestamp = timestamp
94 |
--------------------------------------------------------------------------------
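A hedged sketch of how a caller might combine `find_repository_helper` with the exception hierarchy above; the URL is a placeholder, and what to do after a failed login is left to the caller.

```python
# Illustrative: pick a helper for a URL and handle a failed login.
from plugins.repository.helper import find_repository_helper, RepositoryHelperRepoError

url = "https://github.com/example/project"  # placeholder URL
helper = find_repository_helper(url)
if helper is None:
    print("No repository helper plugin can handle", url)
else:
    try:
        helper.login()
    except RepositoryHelperRepoError as e:
        print("Could not access repository:", e.message)
```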
/requirements.txt:
--------------------------------------------------------------------------------
1 | click==6.7
2 | dominate==2.3.1
3 | Flask==0.12.4
4 | Flask-Bootstrap==3.3.7.1
5 | Flask-SQLAlchemy==2.1
6 | Flask-WTF==0.14.2
7 | github3.py==0.9.6
8 | itsdangerous==0.24
9 | Jinja2==2.10.1
10 | MarkupSafe==0.23
11 | requests==2.20.0
12 | SQLAlchemy==1.1.5
13 | uritemplate==3.0.0
14 | uritemplate.py==3.0.2
15 | visitor==0.1.3
16 | Werkzeug==0.11.15
17 | WTForms==2.1
18 | Yapsy==1.11.223
19 |
--------------------------------------------------------------------------------
/run.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import config
3 | from app import app
4 |
5 |
6 | # Run this to start the webapp
7 |
8 | def main():
9 | # Set up logging
10 | logging.basicConfig(format='%(asctime)s - %(levelname)s %(funcName)s() - %(message)s',
11 | filename=config.log_file_name,
12 | level=config.log_level)
13 | logging.info('-- Starting the Software Assessment Framework --')
14 |
15 | app.config['WTF_CSRF_ENABLED'] = False
16 | app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
17 | app.run(host=config.host, port=config.port, debug=True)
18 |
19 | if __name__ == '__main__':
20 | main()
21 |
--------------------------------------------------------------------------------
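`run.py` and the GitHub repository helper both import `config`, which is git-ignored and created from `config.py.template`. A hedged sketch of the settings they reference; the values are illustrative and `config.py.template` remains the authoritative list.

```python
# config.py -- illustrative values only; copy and adapt config.py.template instead.
import logging

host = "127.0.0.1"         # interface the Flask app binds to (run.py)
port = 5000                # port the Flask app listens on (run.py)
log_file_name = "saf.log"  # matches the saf.log* pattern in .gitignore
log_level = logging.INFO   # passed to logging.basicConfig() in run.py
github_api_token = ""      # personal access token used by plugins/repository/github.py
```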