├── files
│   ├── .gitignore
│   ├── WebSphereの設定.txt
│   ├── Kotlin.txt
│   ├── Scala.txt
│   ├── PHP.txt
│   ├── Ruby.txt
│   ├── DotNetFramework.txt
│   ├── Go.txt
│   ├── Nodejs.txt
│   ├── Python.txt
│   ├── DotNetCore.txt
│   └── Javaエージェント.txt
├── README.md
├── requirements.txt
├── .vimrc
├── docker-compose.yml
├── Dockerfile
├── feeds
│   ├── end_of_support.xml
│   ├── go_support_tech_update.xml
│   ├── php_support_tech_update.xml
│   ├── java_support_tech_update.xml
│   ├── ruby_support_tech_update.xml
│   ├── python_support_tech_update.xml
│   ├── nodejs_support_tech_update.xml
│   ├── dotnet_core_support_tech_update.xml
│   ├── dotnet_framework_support_tech_update.xml
│   └── index.html
├── .github
│   └── workflows
│       ├── build.yml
│       └── rss.yml
└── work
    ├── go_rlsnote.py
    ├── php_rlsnote.py
    ├── java_rlsnote.py
    ├── ruby_rlsnote.py
    ├── python_rlsnote.py
    ├── dotnet_core_rlsnote.py
    ├── nodejs_rlsnote.py
    ├── nodejs_beta_rlsnote.py
    ├── dotnet_framework_rlsnote.py
    ├── nodejs_protect_rlsnote.py
    ├── contrast_rlsnote.py
    ├── contrast_rlsnote_saas.py
    ├── contrast_rlsnote_eop.py
    ├── go_support_tech.py
    ├── php_support_tech.py
    ├── ruby_support_tech.py
    ├── python_support_tech.py
    ├── nodejs_support_tech.py
    ├── java_support_tech.py
    ├── dotnet_core_support_tech.py
    ├── dotnet_framework_support_tech.py
    └── agent_end_support_chk.py
/files/.gitignore:
--------------------------------------------------------------------------------
1 | *.tmp
2 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # contrast-documentation-rss
2 |
--------------------------------------------------------------------------------
/files/WebSphereの設定.txt:
--------------------------------------------------------------------------------
1 | WebSphereをアプリケーションサーバとして使用している場合は、エージェントをデプロイする前に、WebsphereでJavaエージェントを設定するに記載されている情報もご確認ください。
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | beautifulsoup4==4.12.2
2 | lxml==4.9.3
3 | feedgenerator==2.1.0
4 | django==4.2.24
5 | python-dateutil==2.9.0
6 |
--------------------------------------------------------------------------------
/files/Kotlin.txt:
--------------------------------------------------------------------------------
1 | テクノロジ
2 | サポート対象バージョン
3 | Contrastエージェント
4 | 3.9.1.25108以降
5 | Javaランタイム
6 | JDK 8以上
7 | Kotlinのバージョン
8 | 1.5.x - 1.8.x
--------------------------------------------------------------------------------
/.vimrc:
--------------------------------------------------------------------------------
1 | set nu
2 | set noai
3 | set nobackup
4 | set paste
5 | set expandtab
6 | set tabstop=4
7 | set shiftwidth=4
8 | set noswapfile
9 | syntax on
10 |
--------------------------------------------------------------------------------
/files/Scala.txt:
--------------------------------------------------------------------------------
1 | テクノロジ
2 | サポート対象バージョン
3 | Contrastエージェント
4 | 3.8.11.23624以降
5 | Javaランタイム
6 | JDK 8以上
7 | Scalaのバージョン
8 | 2.12, 2.13
9 | Playのバージョン
10 | 2.6, 2.7, 2.8
11 | Akka HTTP
12 | 10.2.4
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 |
3 | services:
4 | python3:
5 | image: ghcr.io/contrast-security-oss/contrast-documentation-rss/python3:latest
6 | platform: linux/amd64
7 | build: .
8 | container_name: 'rssgen.python3'
9 | tty: true
10 | volumes:
11 | - ./work:/work
12 | - ./files:/files
13 | - ./feeds:/feeds
14 |
15 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.9.18
2 | USER root
3 |
4 | RUN apt-get update
5 | RUN apt-get -y install locales && \
6 | localedef -f UTF-8 -i ja_JP ja_JP.UTF-8
7 | ENV LANG ja_JP.UTF-8
8 | ENV LANGUAGE ja_JP:ja
9 | ENV LC_ALL ja_JP.UTF-8
10 | ENV TZ JST-9
11 | ENV TERM xterm
12 | ENV PYTHONDONTWRITEBYTECODE 1
13 | ENV PYTHONUNBUFFERED 1
14 |
15 | RUN apt-get install -y vim less
16 | RUN pip install --upgrade pip
17 | RUN pip install --upgrade setuptools
18 |
19 | WORKDIR /work
20 | COPY ./requirements.txt /root
21 | RUN pip install -r /root/requirements.txt
22 | COPY ./.vimrc /root
23 |
24 |
--------------------------------------------------------------------------------
/files/PHP.txt:
--------------------------------------------------------------------------------
1 | このエージェントでは、以下のテクノロジをサポートしています。
2 | テクノロジ
3 | サポート対象バージョン
4 | 備考
5 | 言語のバージョン
6 | 7.4、8.0、8.1、8.2、8.3、8.4
7 | エージェントは、mbstringおよびcurl拡張モジュールに依存します。
8 | NTS版のPHPのみをサポート。
9 | アプリケーションフレームワーク
10 | Laravel 6〜12
11 | Symfony 5〜7
12 | Drupal 8〜11
13 | サーバ
14 | Apache
15 | PHP-FPM
16 | その他、mod_phpやphp-fpmを使用しているサーバでも動作する可能性がありますが、現在はサポートされていません。
17 | php-fpmを使用している場合は、clear_envをnoに設定して下さい。この設定により、必要な環境変数がPHPワーカープロセスで使用できるようになります。 デフォルト値を使用すると、全てのシェル環境変数が PHP ワーカープロセスに到達する前にクリアされてしまい、 エージェントが正しく機能しなくなります。
18 | php-fpmプール設定ファイルは、通常/usr/local/etc/php-fpm.d/www.confにあります。
19 | パッケージ管理
20 | Composer
21 | SCAでサポートされています。
22 | コンテンツ管理システム
23 | WordPress 6
--------------------------------------------------------------------------------
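The php-fpm note in `files/PHP.txt` above calls for setting `clear_env` to `no` so that the agent's environment variables survive into the PHP worker processes. A minimal pool-configuration sketch of that setting (the `[www]` pool name and file path are the conventional defaults mentioned in the text, not taken from this repository):

```ini
; /usr/local/etc/php-fpm.d/www.conf
[www]
; By default php-fpm clears shell environment variables before they
; reach the PHP workers; keep them so the agent can read its settings.
clear_env = no
```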
/feeds/end_of_support.xml:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/work/go_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | 
10 | def main():
11 |     locale.setlocale(locale.LC_TIME, "C")
12 |     url = 'https://docs.contrastsecurity.jp/ja/go-agent-release-notes-and-archive.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section.accordion')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 | 
18 |     feed = Rss201rev2Feed(
19 |         title='Go Agent Release Note',
20 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
21 |         description='Go Agent Release Note',
22 |         language='ja',
23 |         author_name="Contrast Security Japan G.K.",
24 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/go_rlsnote.xml',
25 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
26 |     )
27 |     for elem in elems:
28 |         try:
29 |             id_str = elem.get("id")
30 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
31 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
32 |             pubdate = None
33 |             if pubdate_str:
34 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not title.lower().startswith('go'):
37 |                 continue
38 |             #desc = elem.select('div.panel-body')[0].text
39 |             desc_buffer = []
40 |             #for elem2 in elem.select('p, div'):
41 |             for elem2 in elem.select('p'):
42 |                 if elem2.parent.name == 'li':
43 |                     desc_buffer.append('・%s' % elem2.text)
44 |                 else:
45 |                     desc_buffer.append('%s' % elem2.text)
46 |             #print(elem2.text)
47 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
48 |             url = 'https://docs.contrastsecurity.jp/ja/go-agent-release-notes-and-archive.html#%s' % id_str
49 |             guid = 'https://docs.contrastsecurity.jp/ja/go-agent-release-notes-and-archive.html#%s' % id_hash
50 |             if not 'リリース日' in ''.join(desc_buffer):
51 |                 continue
52 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
53 |         except IndexError:
54 |             continue
55 | 
56 |     str_val = feed.writeString('utf-8')
57 |     dom = xml.dom.minidom.parseString(str_val)
58 |     with open('/feeds/go_rlsnote.xml','w') as fp:
59 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
60 | 
61 | if __name__ == "__main__":
62 |     main()
63 | 
64 | 
--------------------------------------------------------------------------------
/work/php_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | 
10 | def main():
11 |     locale.setlocale(locale.LC_TIME, "C")
12 |     url = 'https://docs.contrastsecurity.jp/ja/php-agent-release-notes-and-archive.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section.accordion')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 | 
18 |     feed = Rss201rev2Feed(
19 |         title='PHP Agent Release Note',
20 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
21 |         description='PHP Agent Release Note',
22 |         language='ja',
23 |         author_name="Contrast Security Japan G.K.",
24 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/php_rlsnote.xml',
25 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
26 |     )
27 |     for elem in elems:
28 |         try:
29 |             id_str = elem.get("id")
30 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
31 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
32 |             pubdate = None
33 |             if pubdate_str:
34 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not title.lower().startswith('php'):
37 |                 continue
38 |             #desc = elem.select('div.panel-body')[0].text
39 |             desc_buffer = []
40 |             #for elem2 in elem.select('p, div'):
41 |             for elem2 in elem.select('p'):
42 |                 if elem2.parent.name == 'li':
43 |                     desc_buffer.append('・%s' % elem2.text)
44 |                 else:
45 |                     desc_buffer.append('%s' % elem2.text)
46 |             #print(elem2.text)
47 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
48 |             url = 'https://docs.contrastsecurity.jp/ja/php-agent-release-notes-and-archive.html#%s' % id_str
49 |             guid = 'https://docs.contrastsecurity.jp/ja/php-agent-release-notes-and-archive.html#%s' % id_hash
50 |             if not 'リリース日' in ''.join(desc_buffer):
51 |                 continue
52 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
53 |         except IndexError:
54 |             continue
55 | 
56 |     str_val = feed.writeString('utf-8')
57 |     dom = xml.dom.minidom.parseString(str_val)
58 |     with open('/feeds/php_rlsnote.xml','w') as fp:
59 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
60 | 
61 | if __name__ == "__main__":
62 |     main()
63 | 
64 | 
--------------------------------------------------------------------------------
/work/java_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | 
10 | def main():
11 |     locale.setlocale(locale.LC_TIME, "C")
12 |     url = 'https://docs.contrastsecurity.jp/ja/java-agent-release-notes-and-archive.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section.accordion')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 | 
18 |     feed = Rss201rev2Feed(
19 |         title='Java Agent Release Note',
20 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
21 |         description='Java Agent Release Note',
22 |         language='ja',
23 |         author_name="Contrast Security Japan G.K.",
24 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/java_rlsnote.xml',
25 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
26 |     )
27 |     for elem in elems:
28 |         try:
29 |             id_str = elem.get("id")
30 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
31 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
32 |             pubdate = None
33 |             if pubdate_str:
34 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not title.lower().startswith('java'):
37 |                 continue
38 |             #desc = elem.select('div.panel-body')[0].text
39 |             desc_buffer = []
40 |             #for elem2 in elem.select('p, div'):
41 |             for elem2 in elem.select('p'):
42 |                 if elem2.parent.name == 'li':
43 |                     desc_buffer.append('・%s' % elem2.text)
44 |                 else:
45 |                     desc_buffer.append('%s' % elem2.text)
46 |             #print(elem2.text)
47 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
48 |             url = 'https://docs.contrastsecurity.jp/ja/java-agent-release-notes-and-archive.html#%s' % id_str
49 |             guid = 'https://docs.contrastsecurity.jp/ja/java-agent-release-notes-and-archive.html#%s' % id_hash
50 |             if not 'リリース日' in ''.join(desc_buffer):
51 |                 continue
52 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
53 |         except IndexError:
54 |             continue
55 | 
56 |     str_val = feed.writeString('utf-8')
57 |     dom = xml.dom.minidom.parseString(str_val)
58 |     with open('/feeds/java_rlsnote.xml','w') as fp:
59 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
60 | 
61 | if __name__ == "__main__":
62 |     main()
63 | 
64 | 
--------------------------------------------------------------------------------
/work/ruby_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | 
10 | def main():
11 |     locale.setlocale(locale.LC_TIME, "C")
12 |     url = 'https://docs.contrastsecurity.jp/ja/ruby-agent-release-notes-and-archive.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section.accordion')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 | 
18 |     feed = Rss201rev2Feed(
19 |         title='Ruby Agent Release Note',
20 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
21 |         description='Ruby Agent Release Note',
22 |         language='ja',
23 |         author_name="Contrast Security Japan G.K.",
24 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/ruby_rlsnote.xml',
25 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
26 |     )
27 |     for elem in elems:
28 |         try:
29 |             id_str = elem.get("id")
30 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
31 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
32 |             pubdate = None
33 |             if pubdate_str:
34 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not title.lower().startswith('ruby'):
37 |                 continue
38 |             #desc = elem.select('div.panel-body')[0].text
39 |             desc_buffer = []
40 |             #for elem2 in elem.select('p, div'):
41 |             for elem2 in elem.select('p'):
42 |                 if elem2.parent.name == 'li':
43 |                     desc_buffer.append('・%s' % elem2.text)
44 |                 else:
45 |                     desc_buffer.append('%s' % elem2.text)
46 |             #print(elem2.text)
47 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
48 |             url = 'https://docs.contrastsecurity.jp/ja/ruby-agent-release-notes-and-archive.html#%s' % id_str
49 |             guid = 'https://docs.contrastsecurity.jp/ja/ruby-agent-release-notes-and-archive.html#%s' % id_hash
50 |             if not 'リリース日' in ''.join(desc_buffer):
51 |                 continue
52 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
53 |         except IndexError:
54 |             continue
55 | 
56 |     str_val = feed.writeString('utf-8')
57 |     dom = xml.dom.minidom.parseString(str_val)
58 |     with open('/feeds/ruby_rlsnote.xml','w') as fp:
59 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
60 | 
61 | if __name__ == "__main__":
62 |     main()
63 | 
64 | 
--------------------------------------------------------------------------------
/work/python_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | 
10 | def main():
11 |     locale.setlocale(locale.LC_TIME, "C")
12 |     url = 'https://docs.contrastsecurity.jp/ja/python-agent-release-notes-and-archive.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section.accordion')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 | 
18 |     feed = Rss201rev2Feed(
19 |         title='Python Agent Release Note',
20 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
21 |         description='Python Agent Release Note',
22 |         language='ja',
23 |         author_name="Contrast Security Japan G.K.",
24 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/python_rlsnote.xml',
25 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
26 |     )
27 |     for elem in elems:
28 |         try:
29 |             id_str = elem.get("id")
30 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
31 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
32 |             pubdate = None
33 |             if pubdate_str:
34 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not title.lower().startswith('python'):
37 |                 continue
38 |             #desc = elem.select('div.panel-body')[0].text
39 |             desc_buffer = []
40 |             #for elem2 in elem.select('p, div'):
41 |             for elem2 in elem.select('p'):
42 |                 if elem2.parent.name == 'li':
43 |                     desc_buffer.append('・%s' % elem2.text)
44 |                 else:
45 |                     desc_buffer.append('%s' % elem2.text)
46 |             #print(elem2.text)
47 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
48 |             url = 'https://docs.contrastsecurity.jp/ja/python-agent-release-notes-and-archive.html#%s' % id_str
49 |             guid = 'https://docs.contrastsecurity.jp/ja/python-agent-release-notes-and-archive.html#%s' % id_hash
50 |             if not 'リリース日' in ''.join(desc_buffer):
51 |                 continue
52 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
53 |         except IndexError:
54 |             continue
55 | 
56 |     str_val = feed.writeString('utf-8')
57 |     dom = xml.dom.minidom.parseString(str_val)
58 |     with open('/feeds/python_rlsnote.xml','w') as fp:
59 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
60 | 
61 | if __name__ == "__main__":
62 |     main()
63 | 
64 | 
--------------------------------------------------------------------------------
/work/dotnet_core_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | 
10 | def main():
11 |     locale.setlocale(locale.LC_TIME, "C")
12 |     url = 'https://docs.contrastsecurity.jp/ja/-net-core-agent-release-notes-and-archive.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section.accordion')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 | 
18 |     feed = Rss201rev2Feed(
19 |         title='.NET Core Agent Release Note',
20 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
21 |         description='.NET Core Agent Release Note',
22 |         language='ja',
23 |         author_name="Contrast Security Japan G.K.",
24 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/dotnet_core_rlsnote.xml',
25 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
26 |     )
27 |     for elem in elems:
28 |         try:
29 |             id_str = elem.get("id")
30 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
31 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
32 |             pubdate = None
33 |             if pubdate_str:
34 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not title.lower().startswith('.net'):
37 |                 continue
38 |             #desc = elem.select('div.panel-body')[0].text
39 |             desc_buffer = []
40 |             #for elem2 in elem.select('p, div'):
41 |             for elem2 in elem.select('p'):
42 |                 if elem2.parent.name == 'li':
43 |                     desc_buffer.append('・%s' % elem2.text)
44 |                 else:
45 |                     desc_buffer.append('%s' % elem2.text)
46 |             #print(elem2.text)
47 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
48 |             url = 'https://docs.contrastsecurity.jp/ja/-net-core-agent-release-notes-and-archive.html#%s' % id_str
49 |             guid = 'https://docs.contrastsecurity.jp/ja/-net-core-agent-release-notes-and-archive.html#%s' % id_hash
50 |             if not 'リリース日' in ''.join(desc_buffer):
51 |                 continue
52 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(html.escape(s)) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
53 |         except IndexError:
54 |             continue
55 | 
56 |     str_val = feed.writeString('utf-8')
57 |     dom = xml.dom.minidom.parseString(str_val)
58 |     with open('/feeds/dotnet_core_rlsnote.xml','w') as fp:
59 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
60 | 
61 | if __name__ == "__main__":
62 |     main()
63 | 
64 | 
--------------------------------------------------------------------------------
/work/nodejs_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | import re
10 | 
11 | def main():
12 |     locale.setlocale(locale.LC_TIME, "C")
13 |     url = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html'
14 |     res = req.urlopen(url)
15 |     soup = BeautifulSoup(res, 'lxml')
16 |     elems = soup.select('section.section.accordion')
17 |     modified_date = soup.select_one('span.formatted-date').text.strip()
18 | 
19 |     feed = Rss201rev2Feed(
20 |         title='Node.js Agent Release Note',
21 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
22 |         description='Node.js Agent Release Note',
23 |         language='ja',
24 |         author_name="Contrast Security Japan G.K.",
25 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/nodejs_rlsnote.xml',
26 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
27 |     )
28 |     for elem in elems:
29 |         try:
30 |             id_str = elem.get("id")
31 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
32 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
33 |             pubdate = None
34 |             if pubdate_str:
35 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
36 |             title = elem.select('h3.title')[0].text.strip()
37 |             if not title.lower().startswith('node') or re.search(r'\d$', title) is None: # ベータも無視
38 |                 continue
39 |             #desc = elem.select('div.panel-body')[0].text
40 |             desc_buffer = []
41 |             #for elem2 in elem.select('p, div'):
42 |             for elem2 in elem.select('p'):
43 |                 if elem2.parent.name == 'li':
44 |                     desc_buffer.append('・%s' % elem2.text)
45 |                 else:
46 |                     desc_buffer.append('%s' % elem2.text)
47 |             #print(elem2.text)
48 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
49 |             url = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html#%s' % id_str
50 |             guid = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html#%s' % id_hash
51 |             if not 'リリース日' in ''.join(desc_buffer):
52 |                 continue
53 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
54 |         except IndexError:
55 |             continue
56 | 
57 |     str_val = feed.writeString('utf-8')
58 |     dom = xml.dom.minidom.parseString(str_val)
59 |     with open('/feeds/nodejs_rlsnote.xml','w') as fp:
60 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
61 | 
62 | if __name__ == "__main__":
63 |     main()
64 | 
65 | 
--------------------------------------------------------------------------------
/work/nodejs_beta_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | import re
10 | 
11 | def main():
12 |     locale.setlocale(locale.LC_TIME, "C")
13 |     url = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html'
14 |     res = req.urlopen(url)
15 |     soup = BeautifulSoup(res, 'lxml')
16 |     elems = soup.select('section.section.accordion')
17 |     modified_date = soup.select_one('span.formatted-date').text.strip()
18 | 
19 |     feed = Rss201rev2Feed(
20 |         title='Node.js Agent Beta Release Note',
21 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
22 |         description='Node.js Agent Beta Release Note',
23 |         language='ja',
24 |         author_name="Contrast Security Japan G.K.",
25 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/nodejs_beta_rlsnote.xml',
26 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
27 |     )
28 |     for elem in elems:
29 |         try:
30 |             id_str = elem.get("id")
31 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
32 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
33 |             pubdate = None
34 |             if pubdate_str:
35 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
36 |             title = elem.select('h4.title')[0].text.strip()
37 |             if not (title.lower().startswith('node') and re.search(r'\d$', id_str)):
38 |                 continue
39 |             #desc = elem.select('div.panel-body')[0].text
40 |             desc_buffer = []
41 |             #for elem2 in elem.select('p, div'):
42 |             for elem2 in elem.select('p'):
43 |                 if elem2.parent.name == 'li':
44 |                     desc_buffer.append('・%s' % elem2.text)
45 |                 else:
46 |                     desc_buffer.append('%s' % elem2.text)
47 |             #print(elem2.text)
48 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
49 |             url = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html#%s' % id_str
50 |             guid = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html#%s' % id_hash
51 |             if not 'リリース日' in ''.join(desc_buffer):
52 |                 continue
53 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
54 |         except IndexError:
55 |             continue
56 | 
57 |     str_val = feed.writeString('utf-8')
58 |     dom = xml.dom.minidom.parseString(str_val)
59 |     with open('/feeds/nodejs_beta_rlsnote.xml','w') as fp:
60 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
61 | 
62 | if __name__ == "__main__":
63 |     main()
64 | 
65 | 
--------------------------------------------------------------------------------
/work/dotnet_framework_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | 
10 | def main():
11 |     locale.setlocale(locale.LC_TIME, "C")
12 |     url = 'https://docs.contrastsecurity.jp/ja/-net-framework-agent-release-notes-and-archive.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section.accordion')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 | 
18 |     feed = Rss201rev2Feed(
19 |         title='.NET Framework Agent Release Note',
20 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
21 |         description='.NET Framework Agent Release Note',
22 |         language='ja',
23 |         author_name="Contrast Security Japan G.K.",
24 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/dotnet_framework_rlsnote.xml',
25 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
26 |     )
27 |     for elem in elems:
28 |         try:
29 |             id_str = elem.get("id")
30 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
31 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
32 |             pubdate = None
33 |             if pubdate_str:
34 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not title.lower().startswith('.net'):
37 |                 continue
38 |             #desc = elem.select('div.panel-body')[0].text
39 |             desc_buffer = []
40 |             #for elem2 in elem.select('p, div'):
41 |             for elem2 in elem.select('p'):
42 |                 if elem2.parent.name == 'li':
43 |                     desc_buffer.append('・%s' % elem2.text)
44 |                 else:
45 |                     desc_buffer.append('%s' % elem2.text)
46 |             #print(elem2.text)
47 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
48 |             url = 'https://docs.contrastsecurity.jp/ja/-net-framework-agent-release-notes-and-archive.html#%s' % id_str
49 |             guid = 'https://docs.contrastsecurity.jp/ja/-net-framework-agent-release-notes-and-archive.html#%s' % id_hash
50 |             if not 'リリース日' in ''.join(desc_buffer):
51 |                 continue
52 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(html.escape(s)) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
53 |         except IndexError:
54 |             continue
55 | 
56 |     str_val = feed.writeString('utf-8')
57 |     dom = xml.dom.minidom.parseString(str_val)
58 |     with open('/feeds/dotnet_framework_rlsnote.xml','w') as fp:
59 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
60 | 
61 | if __name__ == "__main__":
62 |     main()
63 | 
64 | 
--------------------------------------------------------------------------------
/work/nodejs_protect_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import html
8 | import hashlib
9 | import re
10 | 
11 | def main():
12 |     locale.setlocale(locale.LC_TIME, "C")
13 |     url = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html'
14 |     res = req.urlopen(url)
15 |     soup = BeautifulSoup(res, 'lxml')
16 |     elems = soup.select('section.section.accordion')
17 |     modified_date = soup.select_one('span.formatted-date').text.strip()
18 | 
19 |     feed = Rss201rev2Feed(
20 |         title='Node.js Agent Protect Release Note',
21 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
22 |         description='Node.js Agent Protect Release Note',
23 |         language='ja',
24 |         author_name="Contrast Security Japan G.K.",
25 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/nodejs_protect_rlsnote.xml',
26 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
27 |     )
28 |     for elem in elems:
29 |         try:
30 |             id_str = elem.get("id")
31 |             #pubdate_str = elem.get("data-publication-date") # November 6, 2023
32 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
33 |             pubdate = None
34 |             if pubdate_str:
35 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
36 |             title = elem.select('h4.title')[0].text.strip()
37 |             if not (title.lower().startswith('node') and re.search(r'protect$', id_str)):
38 |                 continue
39 |             #desc = elem.select('div.panel-body')[0].text
40 |             desc_buffer = []
41 |             #for elem2 in elem.select('p, div'):
42 |             for elem2 in elem.select('p'):
43 |                 if elem2.parent.name == 'li':
44 |                     desc_buffer.append('・%s' % elem2.text)
45 |                 else:
46 |                     desc_buffer.append('%s' % elem2.text)
47 |             #print(elem2.text)
48 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
49 |             url = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html#%s' % id_str
50 |             guid = 'https://docs.contrastsecurity.jp/ja/node-js-agent-release-notes-and-archive.html#%s' % id_hash
51 |             if not 'リリース日' in ''.join(desc_buffer):
52 |                 continue
53 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
54 |         except IndexError:
55 |             continue
56 | 
57 |     str_val = feed.writeString('utf-8')
58 |     dom = xml.dom.minidom.parseString(str_val)
59 |     with open('/feeds/nodejs_protect_rlsnote.xml','w') as fp:
60 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
61 | 
62 | if __name__ == "__main__":
63 |     main()
64 | 
65 | 
--------------------------------------------------------------------------------
/work/contrast_rlsnote.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import re
8 | import html
9 | import hashlib
10 | 
11 | def main():
12 |     url = 'https://docs.contrastsecurity.jp/ja/release.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 |     #print(modified_date)
18 | 
19 |     feed = Rss201rev2Feed(
20 |         title='Contrast Release Note',
21 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
22 |         description='Contrast Release Note',
23 |         language='ja',
24 |         author_name="Contrast Security Japan G.K.",
25 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/contrast_rlsnote.xml',
26 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
27 |     )
28 | 
29 |     id_ptn = re.compile(r'^[0-9]{1,2}月-[0-9\-]+-$')
30 |     title_ptn = re.compile(r'^[0-9]{1,2}月\([0-9\.]+\)$')
31 | 
32 |     for elem in elems:
33 |         try:
34 |             id_str = elem.get("id").strip()
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not id_ptn.search(id_str) or not title_ptn.search(title):
37 |                 continue
38 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
39 |             pubdate = None
40 |             if pubdate_str:
41 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
42 |             #print(id_str, pubdate_str, title)
43 |             desc_buffer = []
44 |             for elem2 in elem.select('section.section'):
45 |                 id_str2 = elem2.get("id").strip()
46 |                 #print('- ', elem2.select_one('div.titlepage').text)
47 |                 desc_buffer.append('%s' % elem2.select_one('div.titlepage').text)
48 |                 for elem3 in elem2.select('li.listitem'):
49 |                     #print(' - ', elem3.select_one('p').text)
50 |                     desc_buffer.append('・%s' % elem3.select_one('p').text)
51 |             #print(id_str, elem.get('data-legacy-id'))
52 |             #if not title.lower().startswith('java'):
53 |             #    continue
54 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
55 |             url = 'https://docs.contrastsecurity.jp/ja/release.html#%s' % id_str
56 |             guid = 'https://docs.contrastsecurity.jp/ja/release.html#%s' % id_hash
57 |             if not '月' in title:
58 |                 continue
59 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
60 |         except IndexError:
61 |             continue
62 |     str_val = feed.writeString('utf-8')
63 |     dom = xml.dom.minidom.parseString(str_val)
64 |     with open('/feeds/contrast_rlsnote.xml','w') as fp:
65 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
66 | 
67 | if __name__ == "__main__":
68 |     main()
69 | 
70 | 
--------------------------------------------------------------------------------
/work/contrast_rlsnote_saas.py:
--------------------------------------------------------------------------------
1 | import urllib.request as req
2 | from bs4 import BeautifulSoup
3 | import xml.dom.minidom
4 | from django.utils.feedgenerator import Rss201rev2Feed
5 | from datetime import datetime
6 | import locale
7 | import re
8 | import html
9 | import hashlib
10 | 
11 | def main():
12 |     url = 'https://docs.contrastsecurity.jp/ja/release-hosted.html'
13 |     res = req.urlopen(url)
14 |     soup = BeautifulSoup(res, 'lxml')
15 |     elems = soup.select('section.section')
16 |     modified_date = soup.select_one('span.formatted-date').text.strip()
17 |     #print(modified_date)
18 | 
19 |     feed = Rss201rev2Feed(
20 |         title='Contrast Release Note(SaaS)',
21 |         link='https://contrastsecurity.dev/contrast-documentation-rss',
22 |         description='Contrast Release Note(SaaS)',
23 |         language='ja',
24 |         author_name="Contrast Security Japan G.K.",
25 |         feed_url='https://contrastsecurity.dev/contrast-documentation-rss/contrast_rlsnote_saas.xml',
26 |         feed_copyright='Copyright 2023 Contrast Security Japan G.K.'
27 |     )
28 | 
29 |     id_ptn = re.compile(r'^20[0-9]{2}年[0-9]{1,2}月$')
30 |     title_ptn = re.compile(r'^20[0-9]{2}年[0-9]{1,2}月$')
31 | 
32 |     for elem in elems:
33 |         try:
34 |             id_str = elem.get("id").strip()
35 |             title = elem.select('h3.title')[0].text.strip()
36 |             if not id_ptn.search(id_str) or not title_ptn.search(title):
37 |                 continue
38 |             pubdate_str = elem.get("data-time-modified") # November 6, 2023
39 |             pubdate = None
40 |             if pubdate_str:
41 |                 pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
42 |             #print(id_str, pubdate_str, title)
43 |             desc_buffer = []
44 |             for elem2 in elem.select('section.section'):
45 |                 id_str2 = elem2.get("id").strip()
46 |                 #print('- ', elem2.select_one('div.titlepage').text)
47 |                 desc_buffer.append('%s' % elem2.select_one('div.titlepage').text)
48 |                 for elem3 in elem2.select('li.listitem'):
49 |                     #print(' - ', elem3.select_one('p').text)
50 |                     desc_buffer.append('・%s' % elem3.select_one('p').text)
51 |             #print(id_str, elem.get('data-legacy-id'))
52 |             #if not title.lower().startswith('java'):
53 |             #    continue
54 |             id_hash = hashlib.md5(id_str.encode()).hexdigest()
55 |             url = 'https://docs.contrastsecurity.jp/ja/release-hosted.html#%s' % id_str
56 |             guid = 'https://docs.contrastsecurity.jp/ja/release-hosted.html#%s' % id_hash
57 |             if not '月' in title:
58 |                 continue
59 |             feed.add_item(title=title, link=url, description=''.join(['{0}<br/>'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid)
60 |         except IndexError:
61 |             continue
62 |     str_val = feed.writeString('utf-8')
63 |     dom = xml.dom.minidom.parseString(str_val)
64 |     with open('/feeds/contrast_rlsnote_saas.xml','w') as fp:
65 |         dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ')
66 | 
67 | if __name__ == "__main__":
68 |     main()
69 | 
70 | 
--------------------------------------------------------------------------------
'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid) 60 | except IndexError: 61 | continue 62 | str_val = feed.writeString('utf-8') 63 | dom = xml.dom.minidom.parseString(str_val) 64 | with open('/feeds/contrast_rlsnote_saas.xml','w') as fp: 65 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 66 | 67 | if __name__ == "__main__": 68 | main() 69 | 70 | -------------------------------------------------------------------------------- /work/contrast_rlsnote_eop.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import html 9 | import hashlib 10 | 11 | def main(): 12 | url = 'https://docs.contrastsecurity.jp/ja/release-on-premises.html' 13 | res = req.urlopen(url) 14 | soup = BeautifulSoup(res, 'lxml') 15 | elems = soup.select('section.section') 16 | modified_date = soup.select_one('span.formatted-date').text.strip() 17 | #print(modified_date) 18 | 19 | feed = Rss201rev2Feed( 20 | title='Contrast Release Note(On-premises)', 21 | link='https://contrastsecurity.dev/contrast-documentation-rss', 22 | description='Contrast Release Note(On-premises)', 23 | language='ja', 24 | author_name="Contrast Security Japan G.K.", 25 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/contrast_rlsnote_eop.xml', 26 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
27 | ) 28 | 29 | id_ptn = re.compile(r'^[0-9]{1,2}月-[0-9\-]+-$') 30 | title_ptn = re.compile(r'^[0-9]{1,2}月\([0-9\.]+\)$') 31 | 32 | for elem in elems: 33 | try: 34 | id_str = elem.get("id").strip() 35 | title = elem.select('h3.title')[0].text.strip() 36 | if not id_ptn.search(id_str) or not title_ptn.search(title): 37 | continue 38 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 39 | pubdate = None 40 | if pubdate_str: 41 | pubdate = datetime.strptime(pubdate_str, '%B %d, %Y') 42 | #print(id_str, pubdate_str, title) 43 | desc_buffer = [] 44 | for elem2 in elem.select('section.section'): 45 | id_str2 = elem2.get("id").strip() 46 | #print('- ', elem2.select_one('div.titlepage').text) 47 | desc_buffer.append('%s' % elem2.select_one('div.titlepage').text) 48 | for elem3 in elem2.select('li.listitem'): 49 | #print(' - ', elem3.select_one('p').text) 50 | desc_buffer.append('・%s' % elem3.select_one('p').text) 51 | #print(id_str, elem.get('data-legacy-id')) 52 | #if not title.lower().startswith('java'): 53 | # continue 54 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 55 | url = 'https://docs.contrastsecurity.jp/ja/release-on-premises.html#%s' % id_str 56 | guid = 'https://docs.contrastsecurity.jp/ja/release-on-premises.html#%s' % id_hash 57 | if not '月' in title: 58 | continue 59 | feed.add_item(title=title, link=url, description=''.join(['{0}
'.format(s) for s in desc_buffer]), pubdate=pubdate, unique_id=guid) 60 | except IndexError: 61 | continue 62 | str_val = feed.writeString('utf-8') 63 | dom = xml.dom.minidom.parseString(str_val) 64 | with open('/feeds/contrast_rlsnote_eop.xml','w') as fp: 65 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 66 | 67 | if __name__ == "__main__": 68 | main() 69 | 70 | -------------------------------------------------------------------------------- /.github/workflows/rss.yml: -------------------------------------------------------------------------------- 1 | name: RSS Feed Generator 2 | 3 | on: 4 | schedule: 5 | - cron: "0 0 * * *" 6 | workflow_dispatch: 7 | inputs: 8 | today: 9 | type: string 10 | required: false 11 | description: 'date(YYYY-MM-DD) for end of support check' 12 | 13 | jobs: 14 | rssgen: 15 | runs-on: ubuntu-latest 16 | permissions: 17 | packages: read 18 | pages: write 19 | contents: write 20 | steps: 21 | - name: Checkout code 22 | uses: actions/checkout@v4 23 | - name: Login to Docker 24 | uses: docker/login-action@v3 25 | with: 26 | registry: ghcr.io 27 | username: ${{ github.actor }} 28 | password: ${{ secrets.GITHUB_TOKEN }} 29 | env: 30 | OWNER: ${{ github.repository_owner }} 31 | - name: Pull Docker Image 32 | run: | 33 | docker pull ghcr.io/contrast-security-oss/contrast-documentation-rss/python3:latest 34 | - name: Generate RSS by Python 35 | run: | 36 | docker compose run python3 python java_rlsnote.py 37 | docker compose run python3 python dotnet_framework_rlsnote.py 38 | docker compose run python3 python dotnet_core_rlsnote.py 39 | docker compose run python3 python nodejs_rlsnote.py 40 | docker compose run python3 python nodejs_beta_rlsnote.py 41 | docker compose run python3 python nodejs_protect_rlsnote.py 42 | docker compose run python3 python go_rlsnote.py 43 | docker compose run python3 python python_rlsnote.py 44 | docker compose run python3 python ruby_rlsnote.py 45 | docker compose run python3 python 
php_rlsnote.py 46 | docker compose run python3 python contrast_rlsnote_saas.py 47 | docker compose run python3 python contrast_rlsnote_eop.py 48 | docker compose run python3 python java_support_tech.py 49 | docker compose run python3 python dotnet_framework_support_tech.py 50 | docker compose run python3 python dotnet_core_support_tech.py 51 | docker compose run python3 python nodejs_support_tech.py 52 | docker compose run python3 python go_support_tech.py 53 | docker compose run python3 python python_support_tech.py 54 | docker compose run python3 python ruby_support_tech.py 55 | docker compose run python3 python php_support_tech.py 56 | docker compose run -e TODAY=${{ github.event.inputs.today }} python3 python agent_end_support_chk.py 57 | - name: Commit updated files 58 | run: | 59 | git config core.filemode false 60 | if ! git diff --exit-code --quiet ./files 61 | then 62 | git config user.name "Taka Shiozaki" 63 | git config user.email taka.shiozaki@contrastsecurity.com 64 | git commit -m "Commit updated files" ./files/*.txt 65 | git push 66 | fi 67 | - uses: actions/upload-artifact@v4 68 | with: 69 | name: rss_feeds 70 | path: feeds 71 | - uses: actions/upload-pages-artifact@v3 72 | with: 73 | path: feeds 74 | deploy: 75 | needs: rssgen 76 | runs-on: ubuntu-latest 77 | environment: 78 | name: github-pages 79 | url: ${{ steps.deployment.outputs.page_url }} 80 | permissions: 81 | pages: write 82 | id-token: write 83 | steps: 84 | - uses: actions/deploy-pages@v4 85 | id: deployment 86 | 87 | -------------------------------------------------------------------------------- /work/go_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 
12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/go-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | title = 'Go' 20 | 21 | feed = Rss201rev2Feed( 22 | title='Go Supported Technologies', 23 | link='https://contrastsecurity.dev/contrast-documentation-rss', 24 | description='Go Supported Technologies', 25 | language='ja', 26 | author_name="Contrast Security Japan G.K.", 27 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/go_support_tech_update.xml', 28 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 29 | ) 30 | 31 | write_flg = False 32 | item_dict = {} 33 | pubdate = None 34 | for elem in elems: 35 | try: 36 | if elem.parent.name == 'div': 37 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 38 | if pubdate_str: 39 | pubdate = datetime.strptime(pubdate_str, '%B %d, %Y') 40 | #continue 41 | id_str = elem.get("id") 42 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 43 | url = 'https://docs.contrastsecurity.jp/ja/go-supported-technologies.html#%s' % id_str 44 | guid = 'https://docs.contrastsecurity.jp/ja/go-supported-technologies.html#%s' % id_hash 45 | text_buffer = [] 46 | for elem2 in elem.select('p'): 47 | text_buffer.append(elem2.text) 48 | with open('/files/%s.tmp' % title,'w') as fp: 49 | fp.write('\n'.join(text_buffer)) 50 | before_text = None 51 | try: 52 | with open('/files/%s.txt' % title,'r') as fp: 53 | before_text = fp.readlines() 54 | except FileNotFoundError: 55 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 56 | continue 57 | after_text = None 58 | with open('/files/%s.tmp' % title,'r') as fp: 59 | after_text = fp.readlines() 60 | res = difflib.unified_diff(before_text, after_text) 61 | res_str = '\n'.join(res) 62 | if (len(res_str.strip()) > 0): 63 | print('Found diff: ', title) 64 | 
item_dict[title] = (url, res_str, guid) 65 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 66 | except IndexError: 67 | continue 68 | 69 | if pubdate is None: 70 | pubdate = datetime.today() 71 | for k, v in item_dict.items(): 72 | feed.add_item(title=k, link=v[0], description=''.join(['{0}
'.format(s) for s in v[1].splitlines()]), pubdate=pubdate, unique_id=v[2]) 73 | 74 | if len(item_dict) > 0: 75 | str_val = feed.writeString('utf-8') 76 | dom = xml.dom.minidom.parseString(str_val) 77 | with open('/feeds/go_support_tech_update.xml','w') as fp: 78 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 79 | 80 | if __name__ == "__main__": 81 | main() 82 | 83 | -------------------------------------------------------------------------------- /work/php_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/php-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | title = 'PHP' 20 | 21 | feed = Rss201rev2Feed( 22 | title='PHP Supported Technologies', 23 | link='https://contrastsecurity.dev/contrast-documentation-rss', 24 | description='PHP Supported Technologies', 25 | language='ja', 26 | author_name="Contrast Security Japan G.K.", 27 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/php_support_tech_update.xml', 28 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
29 | ) 30 | 31 | write_flg = False 32 | item_dict = {} 33 | pubdate = None 34 | for elem in elems: 35 | try: 36 | if elem.parent.name == 'div': 37 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 38 | if pubdate_str: 39 | pubdate = datetime.strptime(pubdate_str, '%B %d, %Y') 40 | #continue 41 | id_str = elem.get("id") 42 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 43 | url = 'https://docs.contrastsecurity.jp/ja/php-supported-technologies.html#%s' % id_str 44 | guid = 'https://docs.contrastsecurity.jp/ja/php-supported-technologies.html#%s' % id_hash 45 | text_buffer = [] 46 | for elem2 in elem.select('p'): 47 | text_buffer.append(elem2.text) 48 | with open('/files/%s.tmp' % title,'w') as fp: 49 | fp.write('\n'.join(text_buffer)) 50 | before_text = None 51 | try: 52 | with open('/files/%s.txt' % title,'r') as fp: 53 | before_text = fp.readlines() 54 | except FileNotFoundError: 55 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 56 | continue 57 | after_text = None 58 | with open('/files/%s.tmp' % title,'r') as fp: 59 | after_text = fp.readlines() 60 | res = difflib.unified_diff(before_text, after_text) 61 | res_str = '\n'.join(res) 62 | if (len(res_str.strip()) > 0): 63 | print('Found diff: ', title) 64 | item_dict[title] = (url, res_str, guid) 65 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 66 | except IndexError: 67 | continue 68 | 69 | if pubdate is None: 70 | pubdate = datetime.today() 71 | for k, v in item_dict.items(): 72 | feed.add_item(title=k, link=v[0], description=''.join(['{0}
'.format(s) for s in v[1].splitlines()]), pubdate=pubdate, unique_id=v[2]) 73 | 74 | if len(item_dict) > 0: 75 | str_val = feed.writeString('utf-8') 76 | dom = xml.dom.minidom.parseString(str_val) 77 | with open('/feeds/php_support_tech_update.xml','w') as fp: 78 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 79 | 80 | if __name__ == "__main__": 81 | main() 82 | 83 | -------------------------------------------------------------------------------- /work/ruby_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/ruby-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | title = 'Ruby' 20 | 21 | feed = Rss201rev2Feed( 22 | title='Ruby Supported Technologies', 23 | link='https://contrastsecurity.dev/contrast-documentation-rss', 24 | description='Ruby Supported Technologies', 25 | language='ja', 26 | author_name="Contrast Security Japan G.K.", 27 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/ruby_support_tech_update.xml', 28 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
29 | ) 30 | 31 | write_flg = False 32 | item_dict = {} 33 | pubdate = None 34 | for elem in elems: 35 | try: 36 | if elem.parent.name == 'div': 37 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 38 | if pubdate_str: 39 | pubdate = datetime.strptime(pubdate_str, '%B %d, %Y') 40 | #continue 41 | id_str = elem.get("id") 42 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 43 | url = 'https://docs.contrastsecurity.jp/ja/ruby-supported-technologies.html#%s' % id_str 44 | guid = 'https://docs.contrastsecurity.jp/ja/ruby-supported-technologies.html#%s' % id_hash 45 | text_buffer = [] 46 | for elem2 in elem.select('p'): 47 | text_buffer.append(elem2.text) 48 | with open('/files/%s.tmp' % title,'w') as fp: 49 | fp.write('\n'.join(text_buffer)) 50 | before_text = None 51 | try: 52 | with open('/files/%s.txt' % title,'r') as fp: 53 | before_text = fp.readlines() 54 | except FileNotFoundError: 55 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 56 | continue 57 | after_text = None 58 | with open('/files/%s.tmp' % title,'r') as fp: 59 | after_text = fp.readlines() 60 | res = difflib.unified_diff(before_text, after_text) 61 | res_str = '\n'.join(res) 62 | if (len(res_str.strip()) > 0): 63 | print('Found diff: ', title) 64 | item_dict[title] = (url, res_str, guid) 65 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 66 | except IndexError: 67 | continue 68 | 69 | if pubdate is None: 70 | pubdate = datetime.today() 71 | for k, v in item_dict.items(): 72 | feed.add_item(title=k, link=v[0], description=''.join(['{0}
'.format(s) for s in v[1].splitlines()]), pubdate=pubdate, unique_id=v[2]) 73 | 74 | if len(item_dict) > 0: 75 | str_val = feed.writeString('utf-8') 76 | dom = xml.dom.minidom.parseString(str_val) 77 | with open('/feeds/ruby_support_tech_update.xml','w') as fp: 78 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 79 | 80 | if __name__ == "__main__": 81 | main() 82 | 83 | -------------------------------------------------------------------------------- /work/python_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/python-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | title = 'Python' 20 | 21 | feed = Rss201rev2Feed( 22 | title='Python Supported Technologies', 23 | link='https://contrastsecurity.dev/contrast-documentation-rss', 24 | description='Python Supported Technologies', 25 | language='ja', 26 | author_name="Contrast Security Japan G.K.", 27 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/python_support_tech_update.xml', 28 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
29 | ) 30 | 31 | write_flg = False 32 | item_dict = {} 33 | pubdate = None 34 | for elem in elems: 35 | try: 36 | if elem.parent.name == 'div': 37 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 38 | if pubdate_str: 39 | pubdate = datetime.strptime(pubdate_str, '%B %d, %Y') 40 | #continue 41 | id_str = elem.get("id") 42 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 43 | url = 'https://docs.contrastsecurity.jp/ja/python-supported-technologies.html#%s' % id_str 44 | guid = 'https://docs.contrastsecurity.jp/ja/python-supported-technologies.html#%s' % id_hash 45 | text_buffer = [] 46 | for elem2 in elem.select('p'): 47 | text_buffer.append(elem2.text) 48 | with open('/files/%s.tmp' % title,'w') as fp: 49 | fp.write('\n'.join(text_buffer)) 50 | before_text = None 51 | try: 52 | with open('/files/%s.txt' % title,'r') as fp: 53 | before_text = fp.readlines() 54 | except FileNotFoundError: 55 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 56 | continue 57 | after_text = None 58 | with open('/files/%s.tmp' % title,'r') as fp: 59 | after_text = fp.readlines() 60 | res = difflib.unified_diff(before_text, after_text) 61 | res_str = '\n'.join(res) 62 | if (len(res_str.strip()) > 0): 63 | print('Found diff: ', title) 64 | item_dict[title] = (url, res_str, guid) 65 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 66 | except IndexError: 67 | continue 68 | 69 | if pubdate is None: 70 | pubdate = datetime.today() 71 | for k, v in item_dict.items(): 72 | feed.add_item(title=k, link=v[0], description=''.join(['{0}
'.format(s) for s in v[1].splitlines()]), pubdate=pubdate, unique_id=v[2]) 73 | 74 | if len(item_dict) > 0: 75 | str_val = feed.writeString('utf-8') 76 | dom = xml.dom.minidom.parseString(str_val) 77 | with open('/feeds/python_support_tech_update.xml','w') as fp: 78 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 79 | 80 | if __name__ == "__main__": 81 | main() 82 | 83 | -------------------------------------------------------------------------------- /work/nodejs_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/node-js-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | title = 'Nodejs' 20 | 21 | feed = Rss201rev2Feed( 22 | title='Node.js Supported Technologies', 23 | link='https://contrastsecurity.dev/contrast-documentation-rss', 24 | description='Node.js Supported Technologies', 25 | language='ja', 26 | author_name="Contrast Security Japan G.K.", 27 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/nodejs_support_tech_update.xml', 28 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
29 | ) 30 | 31 | write_flg = False 32 | item_dict = {} 33 | pubdate = None 34 | for elem in elems: 35 | try: 36 | if elem.parent.name == 'div': 37 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 38 | if pubdate_str: 39 | pubdate = datetime.strptime(pubdate_str, '%B %d, %Y') 40 | #continue 41 | id_str = elem.get("id") 42 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 43 | url = 'https://docs.contrastsecurity.jp/ja/node-js-supported-technologies.html#%s' % id_str 44 | guid = 'https://docs.contrastsecurity.jp/ja/node-js-supported-technologies.html#%s' % id_hash 45 | text_buffer = [] 46 | for elem2 in elem.select('p'): 47 | text_buffer.append(elem2.text) 48 | with open('/files/%s.tmp' % title,'w') as fp: 49 | fp.write('\n'.join(text_buffer)) 50 | before_text = None 51 | try: 52 | with open('/files/%s.txt' % title,'r') as fp: 53 | before_text = fp.readlines() 54 | except FileNotFoundError: 55 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 56 | continue 57 | after_text = None 58 | with open('/files/%s.tmp' % title,'r') as fp: 59 | after_text = fp.readlines() 60 | res = difflib.unified_diff(before_text, after_text) 61 | res_str = '\n'.join(res) 62 | if (len(res_str.strip()) > 0): 63 | print('Found diff: ', title) 64 | item_dict[title] = (url, res_str, guid) 65 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 66 | except IndexError: 67 | continue 68 | 69 | if pubdate is None: 70 | pubdate = datetime.today() 71 | for k, v in item_dict.items(): 72 | feed.add_item(title=k, link=v[0], description=''.join(['{0}
'.format(s) for s in v[1].splitlines()]), pubdate=pubdate, unique_id=v[2]) 73 | 74 | if len(item_dict) > 0: 75 | str_val = feed.writeString('utf-8') 76 | dom = xml.dom.minidom.parseString(str_val) 77 | with open('/feeds/nodejs_support_tech_update.xml','w') as fp: 78 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 79 | 80 | if __name__ == "__main__": 81 | main() 82 | 83 | -------------------------------------------------------------------------------- /work/java_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime as dt 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/java-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | 20 | feed = Rss201rev2Feed( 21 | title='Java Supported Technologies', 22 | link='https://contrastsecurity.dev/contrast-documentation-rss', 23 | description='Java Supported Technologies', 24 | language='ja', 25 | author_name="Contrast Security Japan G.K.", 26 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/java_support_tech_update.xml', 27 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
28 | ) 29 | 30 | write_flg = False 31 | item_dict = {} 32 | pubdate = None 33 | for elem in elems: 34 | try: 35 | if elem.parent.name == 'div': 36 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 37 | if pubdate_str: 38 | pubdate = dt.strptime(pubdate_str, '%B %d, %Y') 39 | continue 40 | id_str = elem.get("id") 41 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 42 | url = 'https://docs.contrastsecurity.jp/ja/java-supported-technologies.html#%s' % id_str 43 | guid = 'https://docs.contrastsecurity.jp/ja/java-supported-technologies.html#%s' % id_hash 44 | title = elem.select_one('h2.title').text 45 | text_buffer = [] 46 | for elem2 in elem.select('p'): 47 | text_buffer.append(elem2.text) 48 | with open('/files/%s.tmp' % title,'w') as fp: 49 | fp.write('\n'.join(text_buffer)) 50 | before_text = None 51 | try: 52 | with open('/files/%s.txt' % title,'r') as fp: 53 | before_text = fp.readlines() 54 | except FileNotFoundError: 55 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 56 | continue 57 | after_text = None 58 | with open('/files/%s.tmp' % title,'r') as fp: 59 | after_text = fp.readlines() 60 | res = difflib.unified_diff(before_text, after_text) 61 | res_str = '\n'.join(res) 62 | if (len(res_str.strip()) > 0): 63 | print('Found diff: ', title) 64 | item_dict[title] = (url, res_str, guid) 65 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 66 | except IndexError: 67 | continue 68 | 69 | now_for_pub = dt.today().replace(second=0, microsecond=0) 70 | for k, v in item_dict.items(): 71 | feed.add_item(title=k, link=v[0], description=''.join(['{0}
'.format(s) for s in v[1].splitlines()]), pubdate=now_for_pub, unique_id=v[2]) 72 | 73 | if len(item_dict) > 0: 74 | str_val = feed.writeString('utf-8') 75 | dom = xml.dom.minidom.parseString(str_val) 76 | with open('/feeds/java_support_tech_update.xml','w') as fp: 77 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 78 | 79 | if __name__ == "__main__": 80 | main() 81 | 82 | -------------------------------------------------------------------------------- /work/dotnet_core_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/-net-core-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | title = 'DotNetCore' 20 | 21 | feed = Rss201rev2Feed( 22 | title='.NET Core Supported Technologies', 23 | link='https://contrastsecurity.dev/contrast-documentation-rss', 24 | description='.NET Core Supported Technologies', 25 | language='ja', 26 | author_name="Contrast Security Japan G.K.", 27 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/dotnet_core_support_tech_update.xml', 28 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
29 | ) 30 | 31 | write_flg = False 32 | item_dict = {} 33 | pubdate = None 34 | for elem in elems: 35 | try: 36 | if elem.parent.name == 'div': 37 | pubdate_str = elem.get("data-time-modified") # November 6, 2023 38 | if pubdate_str: 39 | pubdate = datetime.strptime(pubdate_str, '%B %d, %Y') 40 | #continue 41 | id_str = elem.get("id") 42 | id_hash = hashlib.md5(id_str.encode()).hexdigest() 43 | url = 'https://docs.contrastsecurity.jp/ja/-net-core-supported-technologies.html#%s' % id_str 44 | guid = 'https://docs.contrastsecurity.jp/ja/-net-core-supported-technologies.html#%s' % id_hash 45 | text_buffer = [] 46 | for elem2 in elem.select('p'): 47 | text_buffer.append(elem2.text) 48 | with open('/files/%s.tmp' % title,'w') as fp: 49 | fp.write('\n'.join(text_buffer)) 50 | before_text = None 51 | try: 52 | with open('/files/%s.txt' % title,'r') as fp: 53 | before_text = fp.readlines() 54 | except FileNotFoundError: 55 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 56 | continue 57 | after_text = None 58 | with open('/files/%s.tmp' % title,'r') as fp: 59 | after_text = fp.readlines() 60 | res = difflib.unified_diff(before_text, after_text) 61 | res_str = '\n'.join(res) 62 | if (len(res_str.strip()) > 0): 63 | print('Found diff: ', title) 64 | item_dict[title] = (url, res_str, guid) 65 | shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title) 66 | except IndexError: 67 | continue 68 | 69 | if pubdate is None: 70 | pubdate = datetime.today() 71 | for k, v in item_dict.items(): 72 | feed.add_item(title=k, link=v[0], description=''.join(['{0}
'.format(s) for s in v[1].splitlines()]), pubdate=pubdate, unique_id=v[2]) 73 | 74 | if len(item_dict) > 0: 75 | str_val = feed.writeString('utf-8') 76 | dom = xml.dom.minidom.parseString(str_val) 77 | with open('/feeds/dotnet_core_support_tech_update.xml','w') as fp: 78 | dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent=' ') 79 | 80 | if __name__ == "__main__": 81 | main() 82 | 83 | -------------------------------------------------------------------------------- /work/dotnet_framework_support_tech.py: -------------------------------------------------------------------------------- 1 | import urllib.request as req 2 | from bs4 import BeautifulSoup 3 | import xml.dom.minidom 4 | from django.utils.feedgenerator import Rss201rev2Feed 5 | from datetime import datetime 6 | import locale 7 | import re 8 | import difflib 9 | import shutil 10 | import html 11 | import hashlib 12 | 13 | def main(): 14 | url = 'https://docs.contrastsecurity.jp/ja/-net-framework-supported-technologies.html' 15 | res = req.urlopen(url) 16 | soup = BeautifulSoup(res, 'lxml') 17 | elems = soup.select('section.section') 18 | modified_date = soup.select_one('span.formatted-date').text.strip() 19 | title = 'DotNetFramework' 20 | 21 | feed = Rss201rev2Feed( 22 | title='.NET Framework Supported Technologies', 23 | link='https://contrastsecurity.dev/contrast-documentation-rss', 24 | description='.NET Framework Supported Technologies', 25 | language='ja', 26 | author_name="Contrast Security Japan G.K.", 27 | feed_url='https://contrastsecurity.dev/contrast-documentation-rss/dotnet_framework_support_tech_update.xml', 28 | feed_copyright='Copyright 2023 Contrast Security Japan G.K.' 
    )

    write_flg = False
    item_dict = {}
    pubdate = None
    for elem in elems:
        try:
            if elem.parent.name == 'div':
                pubdate_str = elem.get("data-time-modified")  # e.g. "November 6, 2023"
                if pubdate_str:
                    pubdate = datetime.strptime(pubdate_str, '%B %d, %Y')
                #continue
            id_str = elem.get("id")
            id_hash = hashlib.md5(id_str.encode()).hexdigest()
            url = 'https://docs.contrastsecurity.jp/ja/-net-framework-supported-technologies.html#%s' % id_str
            guid = 'https://docs.contrastsecurity.jp/ja/-net-framework-supported-technologies.html#%s' % id_hash
            text_buffer = []
            for elem2 in elem.select('p'):
                text_buffer.append(elem2.text)
            with open('/files/%s.tmp' % title, 'w') as fp:
                fp.write('\n'.join(text_buffer))
            before_text = None
            try:
                with open('/files/%s.txt' % title, 'r') as fp:
                    before_text = fp.readlines()
            except FileNotFoundError:
                # First run: no previous snapshot to diff against, so just keep the new one.
                shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title)
                continue
            after_text = None
            with open('/files/%s.tmp' % title, 'r') as fp:
                after_text = fp.readlines()
            res = difflib.unified_diff(before_text, after_text)
            res_str = '\n'.join(res)
            if len(res_str.strip()) > 0:
                print('Found diff: ', title)
                item_dict[title] = (url, res_str, guid)
            shutil.move('/files/%s.tmp' % title, '/files/%s.txt' % title)
        except IndexError:
            continue

    if pubdate is None:
        # Fall back to today's date when the page gives no modified date.
        pubdate = datetime.today()
    for k, v in item_dict.items():
        # Join the diff lines with <br/> so they render line-by-line in RSS readers.
        feed.add_item(title=k, link=v[0], description=''.join(['{0}<br/>'.format(s) for s in v[1].splitlines()]), pubdate=pubdate, unique_id=v[2])

    if len(item_dict) > 0:
        str_val = feed.writeString('utf-8')
        dom = xml.dom.minidom.parseString(str_val)
        with open('/feeds/dotnet_framework_support_tech_update.xml', 'w') as fp:
            dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent='    ')

if __name__ == "__main__":
    main()

--------------------------------------------------------------------------------
/feeds/index.html:
--------------------------------------------------------------------------------
        feed.add_item(title=k, link=v[0], description=''.join(['{0}<br/>'.format(s) for s in v[1].splitlines()]), pubdate=now_for_pub, unique_id=v[2])
    str_val = feed.writeString('utf-8')
    dom = xml.dom.minidom.parseString(str_val)
    with open('/feeds/end_of_support.xml', 'w') as fp:
        dom.writexml(fp, encoding='utf-8', newl='\n', indent='', addindent='    ')

    with open(path, 'w') as f:
        json.dump(versions_dict, f, indent=4)

if __name__ == "__main__":
    main()

--------------------------------------------------------------------------------
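The scripts above all share the same change-detection core: write the freshly scraped text to a `.tmp` file, diff it against the previous `.txt` snapshot with `difflib.unified_diff`, and then promote the `.tmp` file to become the new snapshot. A minimal self-contained sketch of that pattern follows; the helper name `detect_change` and the temporary-directory layout are illustrative only (the repository's scripts use a hard-coded `/files` path), not code from the repository.

```python
import difflib
import os
import shutil
import tempfile

def detect_change(workdir, title, new_text):
    """Return a unified diff against the previous snapshot, or '' on first run / no change."""
    tmp_path = os.path.join(workdir, '%s.tmp' % title)
    txt_path = os.path.join(workdir, '%s.txt' % title)
    with open(tmp_path, 'w') as fp:
        fp.write(new_text)
    try:
        with open(txt_path, 'r') as fp:
            before_text = fp.readlines()
    except FileNotFoundError:
        # First run: nothing to compare with, so just keep the snapshot.
        shutil.move(tmp_path, txt_path)
        return ''
    with open(tmp_path, 'r') as fp:
        after_text = fp.readlines()
    res_str = '\n'.join(difflib.unified_diff(before_text, after_text))
    shutil.move(tmp_path, txt_path)  # promote the new snapshot either way
    return res_str

if __name__ == '__main__':
    with tempfile.TemporaryDirectory() as d:
        assert detect_change(d, 'Demo', 'line1\nline2\n') == ''  # first run
        assert detect_change(d, 'Demo', 'line1\nline2\n') == ''  # unchanged
        diff = detect_change(d, 'Demo', 'line1\nline3\n')
        assert '-line2' in diff and '+line3' in diff
```

Returning the diff as a string (rather than a boolean) is what lets the callers above embed the change itself into the RSS item description.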