26 |
27 | ## Features
28 |
29 | - 🚚 Export tweets, replies and likes of any user as JSON/CSV/HTML
30 | - 🔖 Export your bookmarks (without the max 800 limit!)
31 | - 💞 Export the following/followers list of any user
32 | - 👥 Export list members and subscribers
33 | - 🌪️ Export tweets from home timeline and list timeline
34 | - 🔍 Export search results
35 | - ✉️ Export direct messages
36 | - 📦 Download images and videos from tweets in bulk at original size
37 | - 🚀 No developer account or API key required
38 | - 🛠️ Ships as a UserScript; everything runs in your browser
39 | - 💾 Your data never leaves your computer
40 | - 💚 Completely free and open-source
41 |
42 | ## Installation
43 |
44 | 1. Install the browser extension [Tampermonkey](https://www.tampermonkey.net/) or [Violentmonkey](https://violentmonkey.github.io/)
45 | 2. Click [HERE](https://github.com/prinsss/twitter-web-exporter/releases/latest/download/twitter-web-exporter.user.js) to install the user script
46 |
47 | ## Usage
48 |
49 | Once the script is installed, you can find a floating panel on the left side of the page. Click the 🐈 Cat button to close the panel or open it again. You can also open the control panel by clicking the Tampermonkey/Violentmonkey icon in the browser menu bar and then selecting it from the script menu.
50 |
51 | If you do not see the cat button or the menu options as shown in the image, please check if the script is properly installed and enabled.
52 |
53 | 
54 |
55 | Click the ⚙️ Cog button to open the settings panel. You can change the UI theme and enable/disable features of the script here.
56 |
57 | Then open the page that you want to export data from. The script automatically captures data on the following pages:
58 |
59 | - User profile page (tweets, replies, media, likes)
60 | - Bookmark page
61 | - Search results page
62 | - User following/followers page
63 | - List members/subscribers page
64 |
65 | The number of captured items is displayed on the floating panel. Click the ↗️ Arrow button to open the data table view. There you can preview the captured data and select which items to export.
66 |
67 | 
68 |
69 | Click "Export Data" to export captured data to the selected file format. Currently, the script supports exporting to JSON, CSV and HTML. The exported file will be downloaded to your computer.
70 |
71 | By checking the "Include all metadata" option, all available fields from the API are included in the exported file, giving you the most complete dataset. Note that this can significantly increase the size of the exported file.
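As an illustration of the CSV format, exporting captured items can be sketched like this (`toCSV` and the field names are illustrative, not the script's actual API):

```javascript
// Illustrative sketch of CSV export: flatten captured tweet objects into
// rows, quoting any field that contains a comma, quote, or newline.
// Field names here are examples, not the script's actual schema.
function toCSV(rows, fields) {
  const escape = (value) => {
    const s = String(value ?? '');
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const header = fields.join(',');
  const lines = rows.map((row) => fields.map((f) => escape(row[f])).join(','));
  return [header, ...lines].join('\n');
}
```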
72 |
73 | Click "Export Media" to bulk download images and videos from tweets.
74 |
75 | All media files are downloaded at their original size in a zip archive. You can also copy the URLs of the media files if you prefer to download them with an external download manager.
76 |
77 | Please set a reasonable value for the "Rate limit" option to avoid downloading too many files at once. The default value is 1000, which means the script waits for 1 second after downloading each file.
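The rate-limit behavior amounts to a delay loop like the following sketch (helper names are illustrative, not the script's actual implementation):

```javascript
// Sketch of the "Rate limit" option: wait `rateLimitMs` milliseconds after
// each download. With the default of 1000, that is one second per file.
// `downloadFile` is a placeholder for the actual download logic.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function downloadWithRateLimit(urls, rateLimitMs, downloadFile) {
  for (const url of urls) {
    await downloadFile(url);
    await sleep(rateLimitMs);
  }
}
```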
78 |
79 | 
80 |
81 | ## Limitations
82 |
83 | The script only works on the web app (twitter.com). It does not work on the mobile app.
84 |
85 | Basically, **the script "sees" what you see on the page**. If you can't see the data on the page, the script can't access it either. For example, Twitter displays only the latest 3200 tweets on the profile page and the script can't export tweets older than that.
86 |
87 | Data on the web page is loaded dynamically, which means the script can't access the data until it is loaded. You need to keep scrolling down to load more data. Make sure that all data is loaded before exporting.
88 |
89 | The export process is not automated (at least not without third-party tools). It relies on human interaction to trigger the data fetching of the Twitter web app. The script itself does not send any requests to the Twitter API.
90 |
91 | The script does not rely on the official Twitter API and thus is not subject to its rate limits. However, the Twitter web app does have its own limits. If you hit such a rate limit, try again after a few minutes.
92 |
93 | Conversely, the script can export data that is not available from the official API. For example, the official API imposes an 800-item limit when accessing bookmarks. The script can export all bookmarks without that limit, up to whatever the Twitter web app itself allows.
94 |
95 | There is also a limitation on downloading media files. Currently, the script downloads pictures and videos into browser memory and then zips them into a single archive. This can crash the browser if the total size of the media files is too large. The maximum archive size depends on the browser and the available memory of your computer (roughly 2 GB on Chrome and 800 MB on Firefox).
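As a rough guard against that crash, you could estimate the total size before exporting. This is a minimal sketch (the function name and thresholds-as-constants are illustrative; the limits follow the figures above):

```javascript
// Sum the byte sizes of the files queued for download and compare against
// a browser-dependent in-memory archive limit (~2 GB Chrome, ~800 MB Firefox).
const CHROME_LIMIT = 2 * 1024 ** 3;    // ~2 GB
const FIREFOX_LIMIT = 800 * 1024 ** 2; // ~800 MB

function exceedsMemoryLimit(fileSizesInBytes, limitBytes) {
  const total = fileSizesInBytes.reduce((sum, n) => sum + n, 0);
  return total > limitBytes;
}
```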
96 |
97 | ## FAQ
98 |
99 | **Q. How do you get the data?**
100 | A. The script itself does not send any requests to the Twitter API. It installs a network interceptor that captures the responses of GraphQL requests initiated by the Twitter web app, then parses those responses and extracts the data.
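The interception approach can be sketched like this simplified, hypothetical version (`installInterceptor` and `onCapture` are illustrative names; the real script's internals differ):

```javascript
// Patch an XMLHttpRequest-like class so that responses from GraphQL
// endpoints are parsed and passed to a capture callback. The page's own
// requests trigger the capture; nothing extra is sent to the API.
function installInterceptor(XHRClass, onCapture) {
  const originalOpen = XHRClass.prototype.open;
  XHRClass.prototype.open = function (method, url, ...rest) {
    this.addEventListener('load', () => {
      if (String(url).includes('/graphql/')) {
        try {
          onCapture(String(url), JSON.parse(this.responseText));
        } catch {
          // ignore non-JSON responses
        }
      }
    });
    return originalOpen.call(this, method, url, ...rest);
  };
}
```

In the browser, this would be applied to the page's `XMLHttpRequest` before the web app makes any requests.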
101 |
102 | **Q. The script captures nothing!**
103 | A. See [Content-Security-Policy (CSP) Issues #19](https://github.com/prinsss/twitter-web-exporter/issues/19).
104 |
105 | **Q. The exported data is incomplete.**
106 | A. The script can only export data that is loaded by the Twitter web app. Since the data is lazy-loaded, you need to keep scrolling down to load more data. For long lists, you may need to scroll down to the bottom of the page to make sure that all data is loaded before exporting.
107 |
108 | **Q. Can the exporting process be automated?**
109 | A. No. At least not without the help of third-party tools such as auto-scrolling.
110 |
111 | **Q. Do I need a developer account?**
112 | A. No. The script does not send any requests to the Twitter API.
113 |
114 | **Q. Is there an API rate limit?**
115 | A. No. Not until you hit the rate limit of the Twitter web app itself.
116 |
117 | **Q. Will my account be suspended?**
118 | A. Not likely. There is no automatic botting involved and the behavior is similar to manually copying the data from the web page.
119 |
120 | **Q: What about privacy?**
121 | A: Everything is processed on your local browser. No data is sent to the cloud.
122 |
123 | **Q: Why do you build this?**
124 | A: For archival purposes. Twitter's official archive contains only the numeric user IDs of your following/followers, which are not human-readable. The archive also does not include your bookmarks.
125 |
126 | **Q: What's the difference between this and other alternatives?**
127 | A: You don't need a developer account to access the Twitter API. You don't need to send your private data to someone else's server. The script is completely free and open-source.
128 |
129 | **Q: The script does not work!**
130 | A: A platform upgrade may break the script's functionality. Please file an [issue](https://github.com/prinsss/twitter-web-exporter/issues) if you encounter any problems.
131 |
132 | ## License
133 |
134 | [MIT](LICENSE)
135 |
--------------------------------------------------------------------------------
/cliff.toml:
--------------------------------------------------------------------------------
1 | # git-cliff ~ configuration file
2 | # https://git-cliff.org/docs/configuration
3 | #
4 | # Lines starting with "#" are comments.
5 | # Configuration options are organized into tables and keys.
6 | # See documentation for more information on available options.
7 |
8 | [changelog]
9 | # changelog header
10 | header = """
11 | # Changelog\n
12 | All notable changes to this project will be documented in this file.\n
13 | """
14 | # template for the changelog body
15 | # https://keats.github.io/tera/docs/#introduction
16 | body = """
17 | {% if version %}\
18 | {% if previous.version %}\
19 | ## [{{ version | trim_start_matches(pat="v") }}](/compare/{{ previous.version }}..{{ version }}) - {{ timestamp | date(format="%Y-%m-%d") }}
20 | {% else %}\
21 | ## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
22 | {% endif %}\
23 | {% else %}\
24 | ## [unreleased]
25 | {% endif %}\
26 |
27 | {% macro commit(commit) -%}
28 | - {% if commit.scope %}*({{ commit.scope }})* {% endif %}{% if commit.breaking %}[**breaking**] {% endif %}\
29 | {{ commit.message | upper_first }} - ([{{ commit.id | truncate(length=7, end="") }}](/commit/{{ commit.id }}))\
30 | {% endmacro -%}
31 |
32 | {% for group, commits in commits | group_by(attribute="group") %}
33 | ### {{ group | striptags | trim | upper_first }}
34 | {% for commit in commits
35 | | filter(attribute="scope")
36 | | sort(attribute="scope") %}
37 | {{ self::commit(commit=commit) }}
38 | {%- endfor -%}
39 | {% raw %}\n{% endraw %}\
40 | {%- for commit in commits %}
41 | {%- if not commit.scope -%}
42 | {{ self::commit(commit=commit) }}
43 | {% endif -%}
44 | {% endfor -%}
45 | {% endfor %}\n
46 | """
47 | # template for the changelog footer
48 | footer = """
49 |
50 | """
51 | # remove the leading and trailing whitespace from the templates
52 | trim = true
53 | # postprocessors
54 | postprocessors = [
55 | { pattern = '', replace = "https://github.com/prinsss/twitter-web-exporter" }, # replace repository URL
56 | ]
57 |
58 | [git]
59 | # parse the commits based on https://www.conventionalcommits.org
60 | conventional_commits = true
61 | # filter out the commits that are not conventional
62 | filter_unconventional = true
63 | # process each line of a commit as an individual commit
64 | split_commits = false
65 | # regex for preprocessing the commit messages
66 | commit_preprocessors = [
67 | { pattern = '\((\w+\s)?#([0-9]+)\)', replace = "([#${2}](/issues/${2}))" },
68 | # Check spelling of the commit with https://github.com/crate-ci/typos
69 | # If the spelling is incorrect, it will be automatically fixed.
70 | # { pattern = '.*', replace_command = 'typos --write-changes -' },
71 | ]
72 | # regex for parsing and grouping commits
73 | commit_parsers = [
74 | { message = "^feat", group = "⛰️ Features" },
75 | { message = "^fix", group = "🐛 Bug Fixes" },
76 | { message = "^doc", group = "📚 Documentation" },
77 | { message = "^perf", group = "⚡ Performance" },
78 | { message = "^refactor", group = "🚜 Refactor" },
79 | { message = "^style", group = "🎨 Styling" },
80 | { message = "^test", group = "🧪 Testing" },
81 | { message = "^chore\\(release\\): prepare for", skip = true },
82 | { message = "^chore\\(deps\\)", skip = true },
83 | { message = "^chore\\(pr\\)", skip = true },
84 | { message = "^chore\\(pull\\)", skip = true },
85 | { message = "^chore: bump version", skip = true },
86 | { message = "^chore|^ci", group = "⚙️ Miscellaneous Tasks" },
87 | { body = ".*security", group = "🛡️ Security" },
88 | { message = "^revert", group = "◀️ Revert" },
89 | ]
90 | # protect breaking changes from being skipped due to matching a skipping commit_parser
91 | protect_breaking_commits = false
92 | # filter out the commits that are not matched by commit parsers
93 | filter_commits = false
94 | # regex for matching git tags
95 | tag_pattern = "v[0-9].*"
96 | # regex for skipping tags
97 | skip_tags = "beta|alpha"
98 | # regex for ignoring tags
99 | ignore_tags = "rc"
100 | # sort the tags topologically
101 | topo_order = false
102 | # sort the commits inside sections by oldest/newest order
103 | sort_commits = "newest"
104 |
--------------------------------------------------------------------------------
/docs/01-user-interface.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/prinsss/twitter-web-exporter/0308628206fabc5c299566073789de899b82f9d0/docs/01-user-interface.png
--------------------------------------------------------------------------------
/docs/02-export-media.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/prinsss/twitter-web-exporter/0308628206fabc5c299566073789de899b82f9d0/docs/02-export-media.png
--------------------------------------------------------------------------------
/docs/03-menu-commands.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/prinsss/twitter-web-exporter/0308628206fabc5c299566073789de899b82f9d0/docs/03-menu-commands.png
--------------------------------------------------------------------------------
/docs/README.zh-Hans.md:
--------------------------------------------------------------------------------
1 |
97 | {t(
98 | 'Export captured data as JSON/HTML/CSV file. This may take a while depending on the amount of data. The exported file does not include media files such as images and videos but only the URLs.',
99 | )}
100 |