8 |
9 | Search your local government websites to see how they stack up!
10 |
11 |
12 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 | 80
28 | %
29 | of municipal
30 | sites are not
31 | secure
32 |
33 |
34 |
42 |
43 |
44 |
45 |
46 |
47 |
When our physical infrastructure — from bridges to sidewalks to playgrounds — isn’t kept up, we can often see it, and the consequences are dire. Digital infrastructure can be just as important to maintain, and GovLens provides an easy way to see how your community is doing.
48 |
49 |
50 |
51 |
52 |
53 |
HTTPS means that your traffic is encrypted so only you and the website you are visiting know what you are communicating. This isn’t foolproof security, but it is a basic step that makes it harder for others to eavesdrop on what you are reading, intercept messages you send to government agencies, or secretly modify what you see.
54 |
55 |
56 |
57 |
58 |
59 |
60 | We want to help agency staff, the press, and the general public better understand the state of government web security while also providing tips on how things can be improved — even on tight budgets. Look up your city or an agency in the search box above, see how it scores, and learn more about potential ways to keep improving.
61 |
62 |
63 |
64 |
65 |
66 | {% endblock content %}
67 |
--------------------------------------------------------------------------------
/scrapers/scrapers/accessibility_scraper.py:
--------------------------------------------------------------------------------
1 | from .base_scraper import BaseScraper
2 | from ..lighthouse import PageInsightsClient
3 |
4 |
5 | class AccessibilityScraper(BaseScraper):
6 | def __init__(self, raw_page_content, url):
7 | self.page = raw_page_content
8 | self.url = url
9 | self.apiClient = PageInsightsClient()
10 |
11 | def get_website_accessibility_info(self):
12 | return {
13 | "mobile_friendly": self.get_mobile_friendliness(),
14 | "page_speed": self.get_page_speed(),
15 | "performance": self.get_site_performance(),
16 | "multi_lingual": self.get_multi_lingual(),
17 | }
18 |
19 |     def get_multi_lingual(self):
20 |         # look for translation or language-selection cues anywhere in the
21 |         # page text; each keyword is checked individually so the membership
22 |         # test applies to every string
23 |         keywords = (
24 |             "translate",
25 |             "select language",
26 |             "select-language",
27 |             "espanol",
28 |             "español",
29 |         )
30 |         page_text = self.page.text.lower()
31 |         is_criteria_met = any(keyword in page_text for keyword in keywords)
32 |         return self.get_criteria_object(None, is_criteria_met)
33 |
34 | def get_site_performance(self):
35 | try:
36 | lighthouse_results = self.apiClient.get_page_insights(
37 | self.url, "performance"
38 | ).content["lighthouseResult"]
39 | performanceResults = lighthouse_results["categories"]["performance"][
40 | "score"
41 | ]
42 |             # the Lighthouse score is a fraction between 0 and 1
43 |             is_criteria_met = performanceResults >= 0.8
44 |
45 | return self.get_criteria_object(performanceResults, is_criteria_met)
46 | except Exception:
47 | print("Error in get_site_performance for", self.url)
48 |
49 | def get_mobile_friendliness(self):
50 | try:
51 | lighthouse_results = self.apiClient.get_page_insights(
52 | self.url, "pwa"
53 | ).content["lighthouseResult"]
54 | # If the width of your app's content doesn't match the width of the viewport, your app might not be optimized for mobile screens.
55 | score = lighthouse_results["audits"]["content-width"]["score"]
56 | title = lighthouse_results["audits"]["content-width"]["title"]
57 |             # the audit title text doubles as a pass/fail indicator here;
58 |             # the numeric `score` is recorded alongside it
59 |             is_criteria_met = (
60 |                 title == "Content is sized correctly for the viewport"
61 |             )
62 | return self.get_criteria_object(score, is_criteria_met)
63 | except Exception:
64 | print("Error in get_mobile_friendliness for", self.url)
65 |
66 | def get_page_speed(self):
67 | try:
68 | lighthouse_results = self.apiClient.get_page_insights(
69 | self.url, "performance"
70 | ).content["lighthouseResult"]
71 | speed_index = lighthouse_results["audits"]["speed-index"]["score"]
72 |             # the Lighthouse speed-index score is a fraction between 0 and 1
73 |             is_criteria_met = speed_index >= 0.8
74 |
75 | return self.get_criteria_object(speed_index, is_criteria_met)
76 | except Exception:
77 | print("Error in get_page_speed for", self.url)
78 |
--------------------------------------------------------------------------------
/scrapers/README.rst:
--------------------------------------------------------------------------------
1 | ``scrapers``
2 | ------------
3 |
4 | Description
5 | ===========
6 | Scripts ("scrapers") that scrape agency information and post it to the Django API.
7 |
8 | Directory Structure
9 | ===================
10 |
11 | ::
12 |
13 | ├── agency_api_service.py - connects to GovLens API for agency info
14 | ├── agency_dataaccessor.py - read/write to/from database containing scraped info
15 | ├── lighthouse.py - connects to Google Lighthouse API
16 | ├── process_agency_info.py - connects to an agency site & runs scrapers
17 | ├── README.rst - this file!
18 | ├── scrape_handler.py - **Start here!** Starts API services and maps to agency processors.
19 | ├── urls.json - list of URLS pointing to government sites
20 | ├── data/
21 | │ └── agencies.csv - spreadsheet containing scraped information (match of Google Sheets?)
22 | └── scrapers/
23 | ├── __init__.py
24 | ├── accessibility_scraper.py - scrapes for multi-language, performance, mobile-bility
25 | ├── base_api_client.py
26 | ├── base_scraper.py - base class for scrapers to inherit
27 | ├── security_scraper.py - scrapes for HTTPS & privacy policy
28 | └── social_scraper.py - scrapes for phone number, email, address, social media
29 |
30 | Quick Start
31 | ===========
32 |
33 | Configuration
34 | ~~~~~~~~~~~~~
35 |
36 | There are a few required environment variables. The easiest way to set them in development is to create a file called ``.env`` in the root directory of this repository (don't commit this file) containing the following text::
37 |
38 | GOVLENS_API_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
39 | GOVLENS_API_ENDPOINT=http://127.0.0.1:8000/api/agencies/
40 | GOOGLE_API_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXX
41 |
42 | To get the ``GOOGLE_API_TOKEN``, you need to visit the following page: https://developers.google.com/speed/docs/insights/v5/get-started
43 |
44 | To get the ``GOVLENS_API_TOKEN``, run ``python3 manage.py create_scraper_user``. Copy the token from the command output and paste it into the ``.env`` file.
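Once the ``.env`` file exists, it is easy to sanity-check before running the scraper. The sketch below is a hypothetical helper (not part of this repo) that parses simple ``KEY=value`` lines and reports which of the required variables are missing or empty:

```python
REQUIRED = ("GOVLENS_API_TOKEN", "GOVLENS_API_ENDPOINT", "GOOGLE_API_TOKEN")

def load_dotenv_file(path=".env"):
    """Minimal .env parser: KEY=value lines; blanks and '#' comments ignored."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

def missing_settings(env):
    """Return the names of required variables that are absent or empty."""
    return [name for name in REQUIRED if not env.get(name)]
```

``missing_settings(load_dotenv_file())`` returning an empty list means all three variables are set.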
45 |
46 | Execution
47 | ~~~~~~~~~
48 |
49 | Once you have created the ``.env`` file as described above, run the scraper::
50 |
51 | # run the following from the root directory of the repository
52 | python3 -m scrapers.scrape_handler
53 |
54 | Design
55 | ======
56 |
57 | The scraper is intended to be used both locally and on AWS Lambda.
58 |
59 | The ``scrapers`` directory in the root of this repository is the top-level Python package for this project. This means that any absolute imports should begin with ``scrapers.MODULE_NAME_HERE``.
60 |
61 | ``scrapers/scrape_handler.py`` is the main Python module invoked. On AWS Lambda, the method ``scrape_handler.scrape_data()`` is imported and called directly.
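A hypothetical sketch of that Lambda entry point is below; the handler name, and whether ``scrape_data`` accepts the Lambda event and context arguments, are assumptions rather than details taken from this repo:

```python
# Hypothetical AWS Lambda wrapper around the scraper (names are assumptions).
def lambda_handler(event, context):
    # absolute import, per the package layout described above
    from scrapers import scrape_handler
    return scrape_handler.scrape_data(event, context)
```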
62 |
63 | AWS Lambda
64 | ~~~~~~~~~~
65 | To push the scraper to AWS Lambda:
66 |
67 | 1. Zip the ``scrapers/`` folder.
68 | 2. Go to AWS Lambda and upload the zipped folder: https://console.aws.amazon.com/lambda/home?region=us-east-1#/functions
69 | 3. Test the Lambda by invoking it with this JSON (??)
70 | 4. Confirm that there are no errors by checking the CloudWatch logs: https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logStream:group=/aws/lambda/scrapers;streamFilter=typeLogStreamPrefix
71 |
--------------------------------------------------------------------------------
/apps/civic_pulse/templates/agency-list.html:
--------------------------------------------------------------------------------
1 | {% extends 'base.html' %}
2 | {% load static %}
3 | {% block css %}
4 |
5 | {% endblock css %}
6 | {% block content %}
7 |
8 |
35 |
36 | {% if agencies %}
37 |
38 |
39 |
40 |
41 | {% for agency in agencies %}
42 |
43 |
44 |
45 |
{{ agency.name }}
46 |
47 |
48 |
51 |
52 |
53 | {% endfor %}
54 |
55 |
56 |
57 | {% else %}
58 |
No agencies available.
59 | {% endif %}
60 |
61 | {% if is_paginated %}
62 |
81 | {% endif %}
82 |
83 |
84 | {% endblock content %}
--------------------------------------------------------------------------------
/config/settings/settings.py:
--------------------------------------------------------------------------------
1 | """
2 | Django settings for civic_pulse project.
3 |
4 | Generated by 'django-admin startproject' using Django 1.11.1.
5 |
6 | For more information on this file, see
7 | https://docs.djangoproject.com/en/1.11/topics/settings/
8 |
9 | For the full list of settings and their values, see
10 | https://docs.djangoproject.com/en/1.11/ref/settings/
11 | """
12 |
13 | import os
14 |
15 | # the project directory is obtained by finding the directory path two levels
16 | # up the directory of this file. dirname is to get the directory, and then
17 | # join and normpath functions generate and convert the path to the correct
18 | # project directory
19 | BASE_DIR = os.path.normpath(
20 | os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..")
21 | )
22 |
23 |
24 | # Quick-start development settings - unsuitable for production
25 | # See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
26 |
27 | # SECURITY WARNING: keep the secret key used in production secret!
28 | SECRET_KEY = "=wc@ng&5%_1)sbj&&wuq8$oy5a#1k%^3qqkam@x%9*6k57t30)"
29 |
30 | # SECURITY WARNING: don't run with debug turned on in production!
31 | DEBUG = True
32 |
33 | ALLOWED_HOSTS = [
34 | "app",
35 | "localhost",
36 | "0.0.0.0",
37 | "127.0.0.1",
38 | "civicpulse-env.4bvxnwhus8.us-east-1.elasticbeanstalk.com",
39 | ]
40 |
41 | # Database
42 | # https://docs.djangoproject.com/en/1.11/ref/settings/#databases
43 |
44 | """
45 | Elastic Beanstalk automatically adds database environment variables.
46 | By checking for the environment variable 'RDS_DB_NAME' we can determine
47 | whether this is a development or a production environment.
48 | """
49 | if "RDS_DB_NAME" in os.environ:
50 | DATABASES = {
51 | "default": {
52 | "ENGINE": "django.db.backends.postgresql_psycopg2",
53 | "NAME": os.environ["RDS_DB_NAME"],
54 | "USER": os.environ["RDS_USERNAME"],
55 | "PASSWORD": os.environ["RDS_PASSWORD"],
56 | "HOST": os.environ["RDS_HOSTNAME"],
57 | "PORT": os.environ["RDS_PORT"],
58 | }
59 | }
60 | else:
61 | DATABASES = {
62 | "default": {
63 | "ENGINE": "django.db.backends.sqlite3",
64 | "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
65 | }
66 | }
67 |
68 |
69 | # Application definition
70 |
71 | INSTALLED_APPS = [
72 | "apps.civic_pulse",
73 | "rest_framework",
74 | "rest_framework.authtoken",
75 | "django.contrib.admin",
76 | "django.contrib.auth",
77 | "django.contrib.contenttypes",
78 | "django.contrib.sessions",
79 | "django.contrib.messages",
80 | "django.contrib.staticfiles",
81 | ]
82 |
83 | MIDDLEWARE = [
84 | "django.middleware.security.SecurityMiddleware",
85 | "django.contrib.sessions.middleware.SessionMiddleware",
86 | "django.middleware.common.CommonMiddleware",
87 | "django.middleware.csrf.CsrfViewMiddleware",
88 | "django.contrib.auth.middleware.AuthenticationMiddleware",
89 | "django.contrib.messages.middleware.MessageMiddleware",
90 | "django.middleware.clickjacking.XFrameOptionsMiddleware",
91 | ]
92 |
93 | ROOT_URLCONF = "config.urls"
94 |
95 | TEMPLATES = [
96 | {
97 | "BACKEND": "django.template.backends.django.DjangoTemplates",
98 | "DIRS": [os.path.join(BASE_DIR, "../../templates")],
99 | "APP_DIRS": True,
100 | "OPTIONS": {
101 | "context_processors": [
102 | "django.template.context_processors.debug",
103 | "django.template.context_processors.request",
104 | "django.contrib.auth.context_processors.auth",
105 | "django.contrib.messages.context_processors.messages",
106 | ],
107 | },
108 | },
109 | ]
110 |
111 | WSGI_APPLICATION = "config.wsgi.application"
112 |
113 |
114 | # Password validation
115 | # https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
116 |
117 | AUTH_PASSWORD_VALIDATORS = [
118 | {
119 | "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"
120 | },
121 | {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
122 | {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
123 | {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
124 | ]
125 |
126 |
127 | REST_FRAMEWORK = {
128 | "DEFAULT_AUTHENTICATION_CLASSES": (
129 | "rest_framework.authentication.TokenAuthentication",
130 | ),
131 | "DEFAULT_PERMISSION_CLASSES": (
132 | "rest_framework.permissions.IsAuthenticatedOrReadOnly",
133 | ),
134 | }
135 |
136 | # Internationalization
137 | # https://docs.djangoproject.com/en/1.11/topics/i18n/
138 |
139 | LANGUAGE_CODE = "en-us"
140 |
141 | TIME_ZONE = "UTC"
142 |
143 | USE_I18N = True
144 |
145 | USE_L10N = True
146 |
147 | USE_TZ = True
148 |
149 |
150 | # Static files (CSS, JavaScript, Images)
151 | # https://docs.djangoproject.com/en/1.11/howto/static-files/
152 |
153 | STATIC_URL = "/static/"
154 |
155 | STATIC_ROOT = os.path.join(BASE_DIR, "apps/static")
156 |
157 | GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY", "")
158 |
--------------------------------------------------------------------------------
/scheduler/src/biz/scheduler.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime, timedelta
2 | import os
3 | import math
4 | import queue
5 | import json
6 | import asyncio
7 | from apscheduler.schedulers.blocking import BlockingScheduler
8 | from apscheduler.schedulers.background import BackgroundScheduler
9 | from .agency_api_service import AgencyApiService
10 | from .scrape_data import ScraperService
11 |
12 |
13 | class Scheduler:
14 | def __init__(self):
15 | self.queue_size = 0
16 | self.job_execution_counter = 0
17 | self.scraper_service = ScraperService()
18 |
19 | def read_settings(self):
20 | data = {}
21 | with open(
22 | os.path.dirname(os.path.abspath(__file__)) + "/job_config.json", "r"
23 | ) as f:
24 | data = json.load(f)
25 | self.agency_list_size = data["agency_batch_size"]
26 | self.job_trigger_settings = data["job_trigger_settings"]
27 | self.interval_between_runs_seconds = data["interval_between_runs_seconds"]
28 | # there is an option to pass all these variables at run time using environment variables
29 | if os.environ.get("day", None) is not None:
30 | self.read_settings_from_environment_variables()
31 |
32 | def read_settings_from_environment_variables(self):
33 | if os.environ.get("agency_batch_size", None) is not None:
34 | self.agency_list_size = int(os.environ.get("agency_batch_size"))
35 | else:
36 | self.agency_list_size = 4
37 | if os.environ.get("interval_between_runs_seconds", None) is not None:
38 | self.interval_between_runs_seconds = int(
39 | os.environ.get("interval_between_runs_seconds")
40 | )
41 | else:
42 | self.interval_between_runs_seconds = 20
43 | if os.environ.get("day", None) is not None:
44 | print(os.environ.get("day"))
45 | print(os.environ.get("hour"))
46 | print(os.environ.get("minute"))
47 | print(os.environ.get("second"))
48 | self.job_trigger_settings["day_of_job"] = os.environ.get("day")
49 | else:
50 | raise Exception("day of job is not specified in the environment variable")
51 | if os.environ.get("hour", None) is not None:
52 | self.job_trigger_settings["hour"] = os.environ.get("hour")
53 | else:
54 | raise Exception("hour is not specified in the environment variable")
55 | if os.environ.get("minute", None) is not None:
56 | self.job_trigger_settings["minute"] = os.environ.get("minute")
57 | else:
58 | raise Exception("minute is not specified in the environment variable")
59 | if os.environ.get("second", None) is not None:
60 | self.job_trigger_settings["second"] = os.environ.get("second")
61 | else:
62 | raise Exception("second is not specified in the environment variable")
63 |
64 | def scheduled_method(self):
65 | print(
66 | f"Started scraping the agency info at {str(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))}"
67 | )
68 | agency_api_service = AgencyApiService()
69 | agency_list = agency_api_service._get()
70 | self.queue_size = math.ceil(len(agency_list) / self.agency_list_size)
71 | self.job_queue = queue.Queue(maxsize=self.queue_size)
72 | self.split_data_into_chunks(agency_list)
73 | self.scrape_scheduled_method()
74 |
75 | def scrape_scheduled_method(self):
76 | self.job_execution_counter = self.job_execution_counter + 1
77 |         print(
78 |             f"Executing job {self.job_execution_counter}; {self.queue_size - self.job_execution_counter} jobs remaining at {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"
79 |         )
80 | if self.job_queue.empty() is False:
81 | agencies = self.job_queue.get()
82 | loop = asyncio.new_event_loop()
83 | asyncio.set_event_loop(loop)
84 | loop.run_until_complete(self.scraper_service.scrape_data(agencies))
85 | scheduler = BlockingScheduler()
86 | scheduler.add_job(
87 | self.scrape_scheduled_method,
88 | next_run_time=datetime.now()
89 | + timedelta(seconds=self.interval_between_runs_seconds),
90 | )
91 | scheduler.start()
92 | else:
93 | print(
94 | f"done with scraping at {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"
95 | )
96 |
97 | def reset_schedule_parameters(self):
98 | self.queue_size = 0
99 | self.job_queue = None
100 | print(
101 | f"Done Scraping the data for the agencies at {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"
102 | )
103 |
104 | def scrape_websites(self):
105 | scheduler = BackgroundScheduler()
106 | scheduler.add_job(
107 | self.scheduled_method,
108 | "cron",
109 | day_of_week=self.job_trigger_settings["day_of_job"],
110 | hour=self.job_trigger_settings["hour"],
111 | minute=self.job_trigger_settings["minute"],
112 | second=self.job_trigger_settings["second"],
113 | )
114 | try:
115 | scheduler.start()
116 | except (KeyboardInterrupt, SystemExit):
117 | pass
118 |
119 | def split_data_into_chunks(self, agencies):
120 | for i in range(0, len(agencies), self.agency_list_size):
121 | self.job_queue.put(agencies[i : i + self.agency_list_size])
122 |
--------------------------------------------------------------------------------
/scrapers/scrapers/social_scraper.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from bs4 import BeautifulSoup
3 | from .base_scraper import BaseScraper
4 | import re
5 | import logging
6 |
7 | logger = logging.getLogger(__name__)
8 |
9 |
10 | class SocialScraper(BaseScraper):
11 |
12 | phone_regex = re.compile(r"((?:\d{3}|\(\d{3}\))?(?:\s|-|\.)?\d{3}(?:\s|-|\.)\d{4})")
13 | email_regex = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,3}")
14 | address_regex = re.compile(
15 | r"\s\d{1,5} [a-zA-Z0-9\s\.,]+[A-Z]{2}[\s\-\s]{0,3}[0-9]{5,6}?[,]*"
16 | )
17 |
18 | def __init__(self, raw_page_content, url):
19 | self.raw_content = raw_page_content
20 | self.url = url
21 |
22 | def scrape_info(self):
23 | soup = BeautifulSoup(self.raw_content.content, "html.parser")
24 | social_media_criteria = [
25 | "twitter.com",
26 | "facebook.com",
27 | "instagram.com",
28 | "youtube.com",
29 | "linkedin.com",
30 | ]
31 | a_tags = soup.findAll("a", href=True)
32 |         social_media_links = []
33 |         contact_us_link, contact_info = "", None
34 | try:
35 | for tag in a_tags:
36 | try:
37 | href_link = tag.get("href", None)
38 | if href_link is not None:
39 | if "contact" in tag.text.lower():
40 | contact_us_link = tag
41 | elif any(link in tag["href"] for link in social_media_criteria):
42 | social_media_links.append(tag["href"])
43 |                 except Exception as ex:
44 |                     logger.error(
45 |                         "An error occurred while trying to extract the"
46 |                         " social media information: %s", ex
47 |                     )
48 | if contact_us_link:
49 | if "http" in contact_us_link["href"]:
50 | logger.info(
51 | f"making an extra call to get the contact info: {contact_us_link['href']}"
52 | )
53 | contact_us_page = requests.get(contact_us_link["href"])
54 | else:
55 | logger.info(
56 | f"making an extra call to get the contact info: {self.url+contact_us_link['href']}"
57 | )
58 | contact_us_page = requests.get(self.url + contact_us_link["href"])
59 | contact_us_soup = BeautifulSoup(contact_us_page.content, "html.parser")
60 | contact_info = self.get_contact_info(contact_us_soup)
61 | else:
62 | logger.info("not making an extra call to get the contact info")
63 | contact_info = self.get_contact_info(soup)
64 |         except Exception as ex:
65 |             logger.error(
66 |                 "An error occurred while processing the social media information: %s", ex
67 |             )
68 |
69 | return social_media_links, contact_info
70 |
71 | def get_contact_info(self, soup):
72 | try:
73 | contact_us_all_elements = soup.findAll()
74 | contact_us_str = ""
75 | emails = []
76 | phone_numbers = []
77 | address = []
78 | for element in contact_us_all_elements:
79 | if "contact" in element.text.lower():
80 | contact_us_str = element.text.replace("\n", " ")
81 | contact_us_str = re.sub("<[^<]+?>", "", contact_us_str)
82 | emails = (
83 | SocialScraper.email_regex.findall(contact_us_str)
84 | if not emails
85 | else emails
86 | )
87 | phone_numbers = (
88 | SocialScraper.phone_regex.findall(contact_us_str)
89 | if not phone_numbers
90 | else phone_numbers
91 | )
92 | address = (
93 | SocialScraper.address_regex.findall(contact_us_str)
94 | if not address
95 | else address
96 | )
97 |
98 | all_contact_info = {}
99 | if contact_us_str:
100 | all_contact_info = {
101 | "email": list(set(emails)),
102 | "phone_number": list(set(phone_numbers)),
103 | "address": list(set(address))[0] if address else [],
104 | }
105 | else:
106 | logger.warning("Contact Information not available")
107 | all_contact_info = {"email": [], "phone_number": [], "address": []}
108 | return all_contact_info
109 |         except Exception as ex:
110 |             logger.error(
111 |                 "An error occurred while extracting the contact"
112 |                 " information for %s: %s", self.url, ex
113 |             )
114 | return None
115 |
116 | def get_outreach_communication_info(self, social_media_info, contact_info):
117 | agency_info = {
118 | "social_media_access": self.get_socialmedia_access(social_media_info),
119 | "contact_access": self.get_contact_access(contact_info),
120 | }
121 | return agency_info
122 |
123 | def get_contact_access(self, contact_info):
124 |         is_contact_info_available = bool(
125 |             contact_info
126 |             and (
127 |                 contact_info["phone_number"]
128 |                 or contact_info["email"]
129 |                 or contact_info["address"]
130 |             )
131 |         )
132 | return self.get_criteria_object(contact_info, is_contact_info_available)
133 |
134 | def get_socialmedia_access(self, social_media_info):
135 |         # a non-empty list of social media links satisfies the criterion
136 |         is_criteria_met = bool(social_media_info)
137 |
138 | return self.get_criteria_object(social_media_info, is_criteria_met)
139 |
--------------------------------------------------------------------------------
/apps/civic_pulse/static/styles.css:
--------------------------------------------------------------------------------
1 | .primary-color {
2 | color: #2E2757;
3 | }
4 |
5 | .primary-background {
6 | background-color: #2E2757;
7 | }
8 |
9 | .secondary-color {
10 | color: #0F8EC5;
11 | }
12 |
13 | .secondary-background {
14 | background-color: #0F8EC5;
15 | }
16 |
17 | body {
18 | font: 400 15px Lato, sans-serif;
19 | line-height: 1.8;
20 | color: #818181;
21 | }
22 |
23 | h2 {
24 | font-size: 24px;
25 | text-transform: uppercase;
26 | color: #303030;
27 | font-weight: 600;
28 | margin-bottom: 30px;
29 | }
30 | h4 {
31 | font-size: 19px;
32 | line-height: 1.375em;
33 | color: #303030;
34 | font-weight: 400;
35 | margin-bottom: 30px;
36 | }
37 |
38 | .tagline {
39 | padding-top: 3vh;
40 | font-weight: 600;
41 | color: white;
42 | text-align: center;
43 | }
44 |
45 | .jumbotron {
46 | background-color: #ff0000;
47 | background-image: url('images/map_boston.png');
48 | background-position: center center;
49 | background-size: cover;
50 | background-repeat: no-repeat;
51 | color: black;
52 | padding: 100px 25px;
53 | font-family: Montserrat, sans-serif;
54 | font-weight: bold !important;
55 | text-shadow: 1px 1px 5px white;
56 | }
57 | .container-fluid {
58 | padding: 60px 50px;
59 | }
60 | .bg-grey {
61 | background-color: #f6f6f6;
62 | }
63 | .logo-small {
64 | color: #f4511e;
65 | font-size: 50px;
66 | }
67 | .logo {
68 | color: #f4511e;
69 | font-size: 200px;
70 | }
71 | .thumbnail {
72 | padding: 0 0 15px 0;
73 | border: none;
74 | border-radius: 0;
75 | }
76 | .thumbnail img {
77 | width: 100%;
78 | height: 100%;
79 | margin-bottom: 10px;
80 | }
81 | .carousel-control.right,
82 | .carousel-control.left {
83 | background-image: none;
84 | color: #f4511e;
85 | }
86 | .carousel-indicators li {
87 | border-color: #f4511e;
88 | }
89 | .carousel-indicators li.active {
90 | background-color: #f4511e;
91 | }
92 | .item h4 {
93 | font-size: 19px;
94 | line-height: 1.375em;
95 | font-weight: 400;
96 | font-style: italic;
97 | margin: 70px 0;
98 | }
99 | .item span {
100 | font-style: normal;
101 | }
102 | .panel {
103 | border: 1px solid #f4511e;
104 | border-radius: 0 !important;
105 | transition: box-shadow 0.5s;
106 | }
107 | .panel:hover {
108 | box-shadow: 5px 0px 40px rgba(0, 0, 0, 0.2);
109 | }
110 | .panel-footer .btn:hover {
111 | border: 1px solid #f4511e;
112 | background-color: #fff !important;
113 | color: #f4511e;
114 | }
115 | .panel-heading {
116 | color: #fff !important;
117 | background-color: #e60000 !important;
118 | padding: 25px;
119 | border-bottom: 1px solid transparent;
120 | border-top-left-radius: 0px;
121 | border-top-right-radius: 0px;
122 | border-bottom-left-radius: 0px;
123 | border-bottom-right-radius: 0px;
124 | }
125 | .panel-footer {
126 | background-color: white !important;
127 | }
128 | .panel-footer h3 {
129 | font-size: 32px;
130 | }
131 | .panel-footer h4 {
132 | color: #aaa;
133 | font-size: 14px;
134 | }
135 | .panel-footer .btn {
136 | margin: 15px 0;
137 | background-color: #f4511e;
138 | color: #fff;
139 | }
140 |
141 | /* total height minus navbar */
142 | .homebody {
143 | height: calc(100vh - 120px);
144 | background: url('images/civicSquare.png') no-repeat fixed;
145 | background-position: center center;
146 | -webkit-background-size: cover;
147 | -moz-background-size: cover;
148 | background-size: cover;
149 | -o-background-size: cover;
150 |
151 | }
152 |
153 | .navbar {
154 | height: 120px;
155 | margin-bottom: 0;
156 | /*background-color: #e60000; */
157 | z-index: 9999;
158 | border: 0;
159 | font-size: 12px !important;
160 | line-height: 1.42857143 !important;
161 | letter-spacing: 4px;
162 | border-radius: 0;
163 | font-family: Montserrat, sans-serif;
164 | }
165 | .navbar li a,
166 | .navbar .navbar-brand {
167 | color: #fff !important;
168 | }
169 | .navbar-nav li a:hover,
170 | .navbar-nav li.active a {
171 | background-color: #ff0000 !important;
172 | }
173 | .navbar-default .navbar-toggle {
174 | border-color: transparent;
175 | color: #fff !important;
176 | }
177 | footer .glyphicon {
178 | font-size: 20px;
179 | margin-bottom: 20px;
180 | color: #f4511e;
181 | }
182 | .slideanim {
183 | visibility: hidden;
184 | }
185 | .slide {
186 | animation-name: slide;
187 | -webkit-animation-name: slide;
188 | animation-duration: 1s;
189 | -webkit-animation-duration: 1s;
190 | visibility: visible;
191 | }
192 | .score {
193 | display: block;
194 | font-size: 1.3em;
195 | font-style: oblique;
196 | }
197 | .text-green {
198 | color: #006f6f;
199 | }
200 | .text-red {
201 | color: #b90000;
202 | }
203 | @keyframes slide {
204 | 0% {
205 | opacity: 0;
206 | transform: translateY(70%);
207 | }
208 | 100% {
209 | opacity: 1;
210 | transform: translateY(0%);
211 | }
212 | }
213 | @-webkit-keyframes slide {
214 | 0% {
215 | opacity: 0;
216 | -webkit-transform: translateY(70%);
217 | }
218 | 100% {
219 | opacity: 1;
220 | -webkit-transform: translateY(0%);
221 | }
222 | }
223 | @media screen and (max-width: 768px) {
224 | .col-sm-4 {
225 | text-align: center;
226 | margin: 25px 0;
227 | }
228 | .btn-lg {
229 | width: 100%;
230 | margin-bottom: 35px;
231 | }
232 | }
233 | @media screen and (max-width: 480px) {
234 | .logo {
235 | font-size: 150px;
236 | }
237 | }
238 |
239 | .agency-list {
240 | display: inline-block;
241 | }
242 |
243 | .agency-list li {
244 | display: inline-block;
245 | width: 200px;
246 | border: solid 1px gainsboro;
247 | height: 100px;
248 | vertical-align: top;
249 | margin: 0.4em;
250 | border-radius: 6px;
251 | padding: 0.5em;
252 | transition: all 0.2s ease;
253 | cursor: pointer;
254 | user-select: none;
255 | box-shadow: 1px 1px 2px #0000000f;
256 | line-height: 1.2;
257 | }
258 |
259 | .agency-list a {
260 | color: #3a3535;
261 | text-decoration: none;
262 | }
263 |
264 | html,
265 | body {
266 | font-family: sans-serif;
267 | }
268 |
269 | .agency-list li:hover {
270 | transform: scale(1.1);
271 | background: #f9f9f9;
272 | }
273 |
274 | .fadein {
275 | animation: fadein 2s;
276 | opacity: 1;
277 | }
278 |
279 | @keyframes fadein {
280 | 0% {
281 | opacity: 0;
282 | }
283 | 100% {
284 | opacity: 1;
285 | }
286 | }
287 |
288 | .svg {
289 | height: 35%;
290 | margin: 0 auto;
291 | }
292 |
293 | .donut {
294 | stroke: #c4c4c4;
295 | stroke-width: 6;
296 | animation: donut2 3s;
297 | stroke-dasharray: 80, 20;
298 | filter:url(#shadow);
299 | }
300 |
301 | @keyframes donut2 {
302 | 0% {
303 | stroke-dasharray: 0, 100;
304 | }
305 | 100% {
306 | stroke-dasharray: 80, 20;
307 | }
308 | }
309 |
310 | .our-card {
311 | background-color: rgba(255,255,255,0.8);
312 | }
313 |
314 | .card-header {
315 | color: #2E2757;
316 | font-size: 1.4em;
317 |     font-weight: bold;
318 | }
319 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # GovLens
2 |
3 | 
4 |
5 | ## About the project
6 |
7 | GovLens is a government transparency project developed by MuckRock and Code for Boston engineers. Our mission is to create a more open, accessible, and secure democracy through examining the technical elements of government agency websites. We use algorithms to score thousands of federal and state agencies based on their transparency, security, privacy, and accessibility. We then publish our findings and help communicate to government agencies possible improvements to their infrastructures that would better the agency as a whole.
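To make the scoring idea concrete, here is a hypothetical sketch of how per-criterion results could roll up into a single report-card score; the criterion names echo the scrapers in this repo, but the equal-weight formula is purely illustrative, not GovLens's actual algorithm:

```python
def overall_score(criteria):
    """Percentage of boolean criteria met, with equal weighting."""
    if not criteria:
        return 0.0
    met = sum(1 for passed in criteria.values() if passed)
    return round(100.0 * met / len(criteria), 1)

# example report card (values are made up)
print(overall_score({
    "https": True,             # security
    "privacy_policy": True,    # security
    "mobile_friendly": False,  # accessibility
    "multi_lingual": False,    # accessibility
}))  # prints 50.0
```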
8 |
9 | 
10 |
11 | ## Why?
12 |
13 | We get reminders all the time of how well our physical civic infrastructure is doing: Did my car hit a pothole? Are the swing sets covered in rust? It can be harder to see how well our digital civic infrastructure is holding up, however, particularly when it comes to the parts of the web that can be invisible to many people: How accessible is a site to people who rely on screen readers or who have reduced vision? Which third-party trackers have access to visitor data, and how is that data being guarded? Are government websites following basic best practices in utilizing secure connections?
14 |
15 | While we have a [National Bridge Inventory](https://www.fhwa.dot.gov/bridge/nbi.cfm) that monitors dangerous bridges and other federal agencies that monitor other core infrastructure issues, we do not have similar insights into how strong or weak much of our digital infrastructure is.
16 |
17 | GovLens helps provide at least the start of an answer by making those often-overlooked aspects of digital infrastructure more visible: public report cards for each agency in our database, plus collated data for each jurisdiction and state, letting us see which areas of the country are leading the way and which might need a little more prodding.
18 |
19 | This is partially inspired by the work of Pulse.CIO.Gov, an official federal government website that monitored the adoption of HTTPS compliance among federal websites, as well as [SecureThe.News](https://securethe.news), which did the same thing for news websites. Both of these projects brought wider visibility to the issue and provided natural and effective peer pressure for website operators to improve. Our hope is we can do the same for local government, while also compiling a rich research data set for future analysis.
20 |
21 | ## Who is this site for?
22 | This site has three core planned audiences:
23 |
24 | * __The general public__, so that they’re better educated about the state of government digital infrastructure and why it matters.
25 | * __Government decision makers__, so that they can understand why they need to invest in better adhering to web standards as well as see where their sites stand compared to their peers.
26 | * __Local and national media outlets__, so as best to reach and influence the above categories.
27 |
28 |
29 | ## Getting started basics
30 |
31 | - [ ] Make sure [you've registered for the Code for Boston Slack](https://communityinviter.com/apps/cfb-public/code-for-boston-slack-invite).
32 | - [ ] Join the #MuckRock channel on Slack.
33 | - [ ] Ask a current member to be added to our Github organization ([They'll need to click here](https://github.com/codeforboston/GovLens/settings/collaboration)). After they've sent you an invite, you'll need to either check your email or notifications in Github (the alarm icon on the top right of your Github page) to accept the invite.
34 | - [ ] If you're interested in working on the backend of the site, [try following the instructions](#installation-instructions)
35 |
36 | ## Project goals
37 |
38 | The goal is to create an automatically updated database that tracks, over time, how well government agency websites at the state, local, and federal levels follow best practices when it comes to HTTPS security, mobile friendliness, reader accessibility, and other key areas.
39 |
40 | Over time, we hope to show whether individual agencies are improving or worsening, and to highlight national shifts along the metrics we monitor. Individual pages show the most recent snapshot ranking, but our API will make historical data available.
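As a sketch of what that historical analysis could look like (assuming a local dev server started with `python3 manage.py runserver`, and the `/api/entries/` fields shown in our test suite):

```python
import json
from urllib.request import urlopen


def https_adoption_rate(entries):
    """Fraction of scrape entries that had HTTPS enabled."""
    if not entries:
        return 0.0
    return sum(1 for e in entries if e["https_enabled"]) / len(entries)


def fetch_entries(base_url="http://127.0.0.1:8000"):
    """Pull all scrape entries from the local dev API (requires `runserver`)."""
    with urlopen(f"{base_url}/api/entries/") as resp:
        return json.load(resp)


# Example: rate = https_adoption_rate(fetch_entries())
```

The same idea extends to the other boolean fields (`mobile_friendly`, `has_privacy_policy`, and so on) for jurisdiction- or state-level rollups.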
41 |
42 | ## Current status
43 |
44 | The project is currently in testing stages, as we work to both develop usable, accurate data and build a pipeline for regularly populating it. The site currently can run locally, but several of the data categories are filled with randomized testing data and any report cards generated are for **demonstration purposes only**. These scores do not represent actual scores for agencies.
45 |
46 | ## Installation instructions
47 |
48 | Check whether Python 3 is already installed:
49 | ```bash
50 | python3 --version
51 | ```
52 | If you do not see a version number, download Python from [python.org](https://www.python.org/downloads/) or search for installation instructions for your operating system. You will need both `python3` and `pip3`.
53 |
54 |
55 | Create a developer account on Github if you don't have one: [Github](https://github.com/)
56 |
57 | Fork the repository on Github, see: [Fork a Repo](https://help.github.com/en/github/getting-started-with-github/fork-a-repo)
58 |
59 | Clone your forked repository from the command line (this will create a GovLens directory):
60 | ```bash
61 | git clone https://github.com/--your-github-name--/GovLens.git
62 | ```
63 |
64 | Navigate to the base directory of the repository and prepare to install dependencies.
65 |
66 | To start, it is recommended to create a
67 | [virtual environment](https://virtualenv.pypa.io/en/stable/userguide/). If you have not
68 | used `virtualenv` before, install it with: `pip3 install virtualenv`.
69 |
70 | ```bash
71 | # Create a virtual environment to manage dependencies
72 | virtualenv venv
73 | source venv/bin/activate
74 | ```
75 |
76 | Now install the dependencies with pip:
77 |
78 | ```bash
79 | # Install requirements.txt
80 | pip3 install -r requirements.txt
81 | ```
82 |
83 | Once the dependencies are installed, prepare the database:
84 |
85 | ```bash
86 | # Perform data migrations
87 | python3 manage.py migrate
88 | ```
89 |
90 | Then, we need to import a CSV file containing existing agency information. Start by
91 | running a Django shell:
92 |
93 | ```bash
94 | python3 manage.py shell
95 |
96 | # From within the shell
97 | >>> from apps.civic_pulse.utils.load_models import *
98 | >>> fill_agency_objects()
99 | >>> exit()
100 | ```
101 |
102 | The following steps connect the api with the scrapers; if you do not wish to do that, you can skip them. We need to create a dummy user so the scraper can access the api. The api is part of the Django project.
103 | Note: the scrapers live in an independent environment, not necessarily on the same server as the Django website. They read and write data to the website through api endpoints.
104 |
105 | - Create an admin user so you can log in to the admin portal of the website at `/admin`:
106 |
107 | ```bash
108 | python3 manage.py createsuperuser --username admin --email admin@admin.com
109 |
110 | # Enter a password when prompted. It can be any password you wish to use;
111 | # you will use it to log in to the admin site.
112 | ```
113 | - Start up the webserver
114 | ```bash
115 | python3 manage.py runserver
116 | ```
117 | Navigate in your browser to `http://127.0.0.1:8000/admin` and log in with the admin user you just created. Click on Agencys and you should see the list of
118 | agencies created by the `fill_agency_objects()` function.
119 |
120 | To set up the scraper, read [the scraper README](scrapers/README.rst).
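As a sketch of how a scraper might talk to the api (the real client code lives alongside the scrapers; the token header and the `/api/entries/` fields here mirror our test suite, and the helper name is illustrative):

```python
import json
from urllib.request import Request

# Local dev server; a deployed scraper would point at the production host.
API_URL = "http://127.0.0.1:8000/api/entries/"


def build_entry_request(token, entry):
    """Build an authenticated POST recording one scrape result.

    `token` is a DRF auth token belonging to the scraper's dummy user.
    `entry` is a dict of Entry fields, e.g. {"agency": 1, "https_enabled": True}.
    """
    return Request(
        API_URL,
        data=json.dumps(entry).encode("utf-8"),
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# A scraper would then send it with urllib.request.urlopen(req).
```

Without the `Authorization` header, the api rejects the POST with a 401, which is exactly what the unauthorized tests in `test_api.py` verify.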
121 |
122 | ## Code formatting
123 | GovLens enforces code style using [Black](https://github.com/psf/black) and pep8 rules using [Flake8](http://flake8.pycqa.org/en/latest/).
124 | To set up automatic code formatting with Black, perform the following steps:
125 | - `pip install -U black pre-commit`
126 | - `pre-commit install`
127 |
128 | To manually run Flake8 from project root:
129 | - `pip install -U flake8`
130 | - `flake8 . --ignore E501,W503,E203`
131 |
--------------------------------------------------------------------------------
/apps/civic_pulse/tests/test_api.py:
--------------------------------------------------------------------------------
1 | import json
2 | from django.test import TestCase
3 | from rest_framework.authtoken.models import Token
4 | from rest_framework.test import APIClient
5 | from django.contrib.auth.models import User
6 | from apps.civic_pulse.models import Agency, Entry
7 |
8 |
9 | class AgencyAPITest(TestCase):
10 | def setUp(self):
11 | Agency.objects.create(name="Test Agency 1")
12 | Agency.objects.create(name="Test Agency 2")
13 | self.client = APIClient()
14 |
15 | def test_GET(self):
16 | response = self.client.get("/api/agencies/")
17 | self.assertEqual(200, response.status_code)
18 |
19 | agencies_json = json.loads(response.content.decode("utf-8"))
20 | expected_results = [
21 | {
22 | "id": 1,
23 | "name": "Test Agency 1",
24 | "website": "",
25 | "twitter": "",
26 | "facebook": "",
27 | "phone_number": "",
28 | "address": "",
29 | "description": "",
30 | "notes": "",
31 | "last_successful_scrape": None,
32 | "scrape_counter": 0,
33 | },
34 | {
35 | "id": 2,
36 | "name": "Test Agency 2",
37 | "website": "",
38 | "twitter": "",
39 | "facebook": "",
40 | "phone_number": "",
41 | "address": "",
42 | "description": "",
43 | "notes": "",
44 | "last_successful_scrape": None,
45 | "scrape_counter": 0,
46 | },
47 | ]
48 | self.assertEqual(agencies_json, expected_results)
49 |
50 | def test_GET_Individual(self):
51 | response = self.client.get("/api/agencies/1/")
52 | self.assertEqual(200, response.status_code)
53 |
54 | agency_json = json.loads(response.content.decode("utf-8"))
55 | expected_results = {
56 | "id": 1,
57 | "name": "Test Agency 1",
58 | "website": "",
59 | "twitter": "",
60 | "facebook": "",
61 | "phone_number": "",
62 | "address": "",
63 | "description": "",
64 | "notes": "",
65 | "last_successful_scrape": None,
66 | "scrape_counter": 0,
67 | }
68 | self.assertEqual(agency_json, expected_results)
69 |
70 | def test_POST_Unauthorized(self):
71 | data = {"name": "Test POST Agency"}
72 | response = self.client.post("/api/agencies/", data=data, format="json")
73 | self.assertEqual(401, response.status_code)
74 |
75 | json_response = json.loads(response.content.decode("utf-8"))
76 | self.assertEqual(
77 | "Authentication credentials were not provided.", json_response["detail"]
78 | )
79 |
80 | def test_POST_Authorized(self):
81 | user = User.objects.create_user(
82 | username="test", email="test@test.test", password="test"
83 | )
84 | token = Token.objects.create(user=user)
85 |
86 | data = {"id": 5, "name": "Test POST Agency"}
87 |
88 | self.client.credentials(HTTP_AUTHORIZATION="Token " + token.key)
89 | response = self.client.post("/api/agencies/", data=data, format="json")
90 | self.assertEqual(201, response.status_code)
91 |
92 | json_response = json.loads(response.content.decode("utf-8"))
93 | expected_results = {
94 | "id": 5,
95 | "name": "Test POST Agency",
96 | "website": "",
97 | "twitter": "",
98 | "facebook": "",
99 | "phone_number": "",
100 | "address": "",
101 | "description": "",
102 | "notes": "",
103 | "last_successful_scrape": None,
104 | "scrape_counter": 0,
105 | }
106 |
107 | self.assertEqual(json_response, expected_results)
108 |
109 |
110 | class EntryAPITest(TestCase):
111 | def setUp(self):
112 | self.agency = Agency.objects.create(name="Test Agency 1", id=1)
113 | Entry.objects.create(agency_id=self.agency.id,)
114 | Entry.objects.create(
115 | agency_id=self.agency.id, https_enabled=True,
116 | )
117 |
118 | self.client = APIClient()
119 |
120 | def test_GET(self):
121 | response = self.client.get("/api/entries/")
122 | self.assertEqual(200, response.status_code)
123 |
124 | entries_json = json.loads(response.content.decode("utf-8"))
125 | expected_results = [
126 | {
127 | "id": 1,
128 | "agency": 1,
129 | "https_enabled": False,
130 | "has_privacy_policy": False,
131 | "mobile_friendly": False,
132 | "good_performance": False,
133 | "has_social_media": False,
134 | "has_contact_info": False,
135 | "notes": "",
136 | },
137 | {
138 | "id": 2,
139 | "agency": 1,
140 | "https_enabled": True,
141 | "has_privacy_policy": False,
142 | "mobile_friendly": False,
143 | "good_performance": False,
144 | "has_social_media": False,
145 | "has_contact_info": False,
146 | "notes": "",
147 | },
148 | ]
149 |
150 | self.assertEqual(entries_json, expected_results)
151 |
152 | def test_GET_Individual(self):
153 | response = self.client.get("/api/entries/1/")
154 | self.assertEqual(200, response.status_code)
155 |
156 | entry_json = json.loads(response.content.decode("utf-8"))
157 | expected_results = {
158 | "id": 1,
159 | "agency": 1,
160 | "https_enabled": False,
161 | "has_privacy_policy": False,
162 | "mobile_friendly": False,
163 | "good_performance": False,
164 | "has_social_media": False,
165 | "has_contact_info": False,
166 | "notes": "",
167 | }
168 |
169 | self.assertEqual(entry_json, expected_results)
170 |
171 | def test_POST_Unauthorized(self):
172 | data = {
173 | "agency": 1,
174 | "https_enabled": True,
175 | "has_privacy_policy": False,
176 | "mobile_friendly": False,
177 | "good_performance": False,
178 | "has_social_media": True,
179 | "has_contact_info": False,
180 | "notes": "",
181 | }
182 | response = self.client.post("/api/entries/", data=data, format="json")
183 | self.assertEqual(401, response.status_code)
184 |
185 | json_response = json.loads(response.content.decode("utf-8"))
186 | self.assertEqual(
187 | "Authentication credentials were not provided.", json_response["detail"]
188 | )
189 |
190 | def test_POST_Authorized(self):
191 | user = User.objects.create_user(
192 | username="test", email="test@test.test", password="test"
193 | )
194 | token = Token.objects.create(user=user)
195 |
196 | data = {
197 | "agency": 1,
198 | "https_enabled": True,
199 | "has_privacy_policy": False,
200 | "mobile_friendly": False,
201 | "good_performance": False,
202 | "has_social_media": True,
203 | "has_contact_info": False,
204 | "notes": "",
205 | }
206 |
207 | self.client.credentials(HTTP_AUTHORIZATION="Token " + token.key)
208 | response = self.client.post("/api/entries/", data=data, format="json")
209 | self.assertEqual(201, response.status_code)
210 |
211 | json_response = json.loads(response.content.decode("utf-8"))
212 | expected_results = {
213 | "id": 3,
214 | "agency": 1,
215 | "https_enabled": True,
216 | "has_privacy_policy": False,
217 | "mobile_friendly": False,
218 | "good_performance": False,
219 | "has_social_media": True,
220 | "has_contact_info": False,
221 | "notes": "",
222 | }
223 | self.assertEqual(json_response, expected_results)
224 |
--------------------------------------------------------------------------------
/apps/civic_pulse/templates/agency-detail.html:
--------------------------------------------------------------------------------
1 | {% extends 'base.html' %}
2 | {% load static %}
3 |
4 | {% block content %}
5 |
6 |
{{ agency }}
7 |
8 |
9 |
10 |
11 |
12 |
13 |
About {{ agency }}
14 |
Insert agency description
15 |
Visit website
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
7/10 {{ agency }} ranks higher than 75% of agencies on our security and privacy criteria. See why.
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
5/10 {{ agency }} ranks lower than 53% of agencies on our accessibility criteria. See why.
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
7/10 {{ agency }} ranks higher than 79% of agencies on our communication criteria. See why.
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
We test each agency once a week. We last checked {{ agency.website }} {{ last_entry.created_date | timesince }} ago.
56 |
Read more about our methodology.
57 |
58 |
59 |
60 |
61 |
62 |
Security & Privacy
63 |
64 |
65 |
66 |
67 | {% include 'check_box.html' with has_feature=last_entry.https_enabled %}
68 |
69 |
70 |
Uses HTTPS
71 |
72 |
73 |
74 |
75 | {% include 'check_box.html' with has_feature=True %}
76 |
77 |
78 |
No third party trackers
79 |
80 |
81 |
82 |
83 | {% include 'check_box.html' with has_feature=last_entry.has_privacy_policy %}
84 |
85 |
86 |
Has a privacy policy
87 |
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
Website Accessibility
96 |
97 |
98 |
99 |
100 | {% include 'check_box.html' with has_feature=last_entry.mobile_friendly %}
101 |
102 |
103 |
Is mobile friendly
104 |
105 |
106 |
107 |
108 | {% include 'check_box.html' with has_feature=True %}
109 |
110 |
111 |
Passes Google's accessibility test
112 |
113 |
114 |
115 |
116 | {% include 'check_box.html' with has_feature=last_entry.good_performance %}
117 |
118 |
119 |
Passes Google's speed test
120 |
121 |
122 |
123 |
124 |
125 |
126 |
127 |
128 |
Communication
129 |
130 |
131 |
132 |
133 | {% include 'check_box.html' with has_feature=last_entry.has_social_media %}
134 |
135 |
136 |
Uses social media
137 |
138 |
139 |
140 |
141 | {% include 'check_box.html' with has_feature=last_entry.has_contact_info %}
142 |
143 |
144 |
Multiple ways to contact
145 |
146 |
147 |
148 |
149 | {% include 'check_box.html' with has_feature=True %}
150 |
151 |
152 |
Posts meetings and minutes
153 |
154 |
155 |
156 |
157 |
158 |
159 |
160 |
161 |
162 |
193 |
194 |
230 |
231 | {% endblock content %}
232 |
--------------------------------------------------------------------------------