├── .gitignore
├── LICENSE.md
├── README.md
├── assets
│   └── citation_world_map.png
├── citation_map
│   ├── __init__.py
│   ├── citation_map.py
│   └── scholarly_support.py
├── demo
│   └── demo.py
└── setup.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode
2 | data
3 | **/__pycache__
4 |
5 | # Models
6 | **/*.pt
7 | **/*.pkl
8 | **/*.pth
9 | **/*.pth.tar
10 |
11 | # Data
12 | **/*.npy
13 | **/*.npz
14 | **/*.csv
15 | **/*.html
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | Attribution-NonCommercial-ShareAlike 4.0 International
2 |
3 | =======================================================================
4 |
5 | Creative Commons Corporation ("Creative Commons") is not a law firm and
6 | does not provide legal services or legal advice. Distribution of
7 | Creative Commons public licenses does not create a lawyer-client or
8 | other relationship. Creative Commons makes its licenses and related
9 | information available on an "as-is" basis. Creative Commons gives no
10 | warranties regarding its licenses, any material licensed under their
11 | terms and conditions, or any related information. Creative Commons
12 | disclaims all liability for damages resulting from their use to the
13 | fullest extent possible.
14 |
15 | Using Creative Commons Public Licenses
16 |
17 | Creative Commons public licenses provide a standard set of terms and
18 | conditions that creators and other rights holders may use to share
19 | original works of authorship and other material subject to copyright
20 | and certain other rights specified in the public license below. The
21 | following considerations are for informational purposes only, are not
22 | exhaustive, and do not form part of our licenses.
23 |
24 | Considerations for licensors: Our public licenses are
25 | intended for use by those authorized to give the public
26 | permission to use material in ways otherwise restricted by
27 | copyright and certain other rights. Our licenses are
28 | irrevocable. Licensors should read and understand the terms
29 | and conditions of the license they choose before applying it.
30 | Licensors should also secure all rights necessary before
31 | applying our licenses so that the public can reuse the
32 | material as expected. Licensors should clearly mark any
33 | material not subject to the license. This includes other CC-
34 | licensed material, or material used under an exception or
35 | limitation to copyright. More considerations for licensors:
36 | wiki.creativecommons.org/Considerations_for_licensors
37 |
38 | Considerations for the public: By using one of our public
39 | licenses, a licensor grants the public permission to use the
40 | licensed material under specified terms and conditions. If
41 | the licensor's permission is not necessary for any reason--for
42 | example, because of any applicable exception or limitation to
43 | copyright--then that use is not regulated by the license. Our
44 | licenses grant only permissions under copyright and certain
45 | other rights that a licensor has authority to grant. Use of
46 | the licensed material may still be restricted for other
47 | reasons, including because others have copyright or other
48 | rights in the material. A licensor may make special requests,
49 | such as asking that all changes be marked or described.
50 | Although not required by our licenses, you are encouraged to
51 | respect those requests where reasonable. More considerations
52 | for the public:
53 | wiki.creativecommons.org/Considerations_for_licensees
54 |
55 | =======================================================================
56 |
57 | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
58 | Public License
59 |
60 | By exercising the Licensed Rights (defined below), You accept and agree
61 | to be bound by the terms and conditions of this Creative Commons
62 | Attribution-NonCommercial-ShareAlike 4.0 International Public License
63 | ("Public License"). To the extent this Public License may be
64 | interpreted as a contract, You are granted the Licensed Rights in
65 | consideration of Your acceptance of these terms and conditions, and the
66 | Licensor grants You such rights in consideration of benefits the
67 | Licensor receives from making the Licensed Material available under
68 | these terms and conditions.
69 |
70 |
71 | Section 1 -- Definitions.
72 |
73 | a. Adapted Material means material subject to Copyright and Similar
74 | Rights that is derived from or based upon the Licensed Material
75 | and in which the Licensed Material is translated, altered,
76 | arranged, transformed, or otherwise modified in a manner requiring
77 | permission under the Copyright and Similar Rights held by the
78 | Licensor. For purposes of this Public License, where the Licensed
79 | Material is a musical work, performance, or sound recording,
80 | Adapted Material is always produced where the Licensed Material is
81 | synched in timed relation with a moving image.
82 |
83 | b. Adapter's License means the license You apply to Your Copyright
84 | and Similar Rights in Your contributions to Adapted Material in
85 | accordance with the terms and conditions of this Public License.
86 |
87 | c. BY-NC-SA Compatible License means a license listed at
88 | creativecommons.org/compatiblelicenses, approved by Creative
89 | Commons as essentially the equivalent of this Public License.
90 |
91 | d. Copyright and Similar Rights means copyright and/or similar rights
92 | closely related to copyright including, without limitation,
93 | performance, broadcast, sound recording, and Sui Generis Database
94 | Rights, without regard to how the rights are labeled or
95 | categorized. For purposes of this Public License, the rights
96 | specified in Section 2(b)(1)-(2) are not Copyright and Similar
97 | Rights.
98 |
99 | e. Effective Technological Measures means those measures that, in the
100 | absence of proper authority, may not be circumvented under laws
101 | fulfilling obligations under Article 11 of the WIPO Copyright
102 | Treaty adopted on December 20, 1996, and/or similar international
103 | agreements.
104 |
105 | f. Exceptions and Limitations means fair use, fair dealing, and/or
106 | any other exception or limitation to Copyright and Similar Rights
107 | that applies to Your use of the Licensed Material.
108 |
109 | g. License Elements means the license attributes listed in the name
110 | of a Creative Commons Public License. The License Elements of this
111 | Public License are Attribution, NonCommercial, and ShareAlike.
112 |
113 | h. Licensed Material means the artistic or literary work, database,
114 | or other material to which the Licensor applied this Public
115 | License.
116 |
117 | i. Licensed Rights means the rights granted to You subject to the
118 | terms and conditions of this Public License, which are limited to
119 | all Copyright and Similar Rights that apply to Your use of the
120 | Licensed Material and that the Licensor has authority to license.
121 |
122 | j. Licensor means the individual(s) or entity(ies) granting rights
123 | under this Public License.
124 |
125 | k. NonCommercial means not primarily intended for or directed towards
126 | commercial advantage or monetary compensation. For purposes of
127 | this Public License, the exchange of the Licensed Material for
128 | other material subject to Copyright and Similar Rights by digital
129 | file-sharing or similar means is NonCommercial provided there is
130 | no payment of monetary compensation in connection with the
131 | exchange.
132 |
133 | l. Share means to provide material to the public by any means or
134 | process that requires permission under the Licensed Rights, such
135 | as reproduction, public display, public performance, distribution,
136 | dissemination, communication, or importation, and to make material
137 | available to the public including in ways that members of the
138 | public may access the material from a place and at a time
139 | individually chosen by them.
140 |
141 | m. Sui Generis Database Rights means rights other than copyright
142 | resulting from Directive 96/9/EC of the European Parliament and of
143 | the Council of 11 March 1996 on the legal protection of databases,
144 | as amended and/or succeeded, as well as other essentially
145 | equivalent rights anywhere in the world.
146 |
147 | n. You means the individual or entity exercising the Licensed Rights
148 | under this Public License. Your has a corresponding meaning.
149 |
150 |
151 | Section 2 -- Scope.
152 |
153 | a. License grant.
154 |
155 | 1. Subject to the terms and conditions of this Public License,
156 | the Licensor hereby grants You a worldwide, royalty-free,
157 | non-sublicensable, non-exclusive, irrevocable license to
158 | exercise the Licensed Rights in the Licensed Material to:
159 |
160 | a. reproduce and Share the Licensed Material, in whole or
161 | in part, for NonCommercial purposes only; and
162 |
163 | b. produce, reproduce, and Share Adapted Material for
164 | NonCommercial purposes only.
165 |
166 | 2. Exceptions and Limitations. For the avoidance of doubt, where
167 | Exceptions and Limitations apply to Your use, this Public
168 | License does not apply, and You do not need to comply with
169 | its terms and conditions.
170 |
171 | 3. Term. The term of this Public License is specified in Section
172 | 6(a).
173 |
174 | 4. Media and formats; technical modifications allowed. The
175 | Licensor authorizes You to exercise the Licensed Rights in
176 | all media and formats whether now known or hereafter created,
177 | and to make technical modifications necessary to do so. The
178 | Licensor waives and/or agrees not to assert any right or
179 | authority to forbid You from making technical modifications
180 | necessary to exercise the Licensed Rights, including
181 | technical modifications necessary to circumvent Effective
182 | Technological Measures. For purposes of this Public License,
183 | simply making modifications authorized by this Section 2(a)
184 | (4) never produces Adapted Material.
185 |
186 | 5. Downstream recipients.
187 |
188 | a. Offer from the Licensor -- Licensed Material. Every
189 | recipient of the Licensed Material automatically
190 | receives an offer from the Licensor to exercise the
191 | Licensed Rights under the terms and conditions of this
192 | Public License.
193 |
194 | b. Additional offer from the Licensor -- Adapted Material.
195 | Every recipient of Adapted Material from You
196 | automatically receives an offer from the Licensor to
197 | exercise the Licensed Rights in the Adapted Material
198 | under the conditions of the Adapter's License You apply.
199 |
200 | c. No downstream restrictions. You may not offer or impose
201 | any additional or different terms or conditions on, or
202 | apply any Effective Technological Measures to, the
203 | Licensed Material if doing so restricts exercise of the
204 | Licensed Rights by any recipient of the Licensed
205 | Material.
206 |
207 | 6. No endorsement. Nothing in this Public License constitutes or
208 | may be construed as permission to assert or imply that You
209 | are, or that Your use of the Licensed Material is, connected
210 | with, or sponsored, endorsed, or granted official status by,
211 | the Licensor or others designated to receive attribution as
212 | provided in Section 3(a)(1)(A)(i).
213 |
214 | b. Other rights.
215 |
216 | 1. Moral rights, such as the right of integrity, are not
217 | licensed under this Public License, nor are publicity,
218 | privacy, and/or other similar personality rights; however, to
219 | the extent possible, the Licensor waives and/or agrees not to
220 | assert any such rights held by the Licensor to the limited
221 | extent necessary to allow You to exercise the Licensed
222 | Rights, but not otherwise.
223 |
224 | 2. Patent and trademark rights are not licensed under this
225 | Public License.
226 |
227 | 3. To the extent possible, the Licensor waives any right to
228 | collect royalties from You for the exercise of the Licensed
229 | Rights, whether directly or through a collecting society
230 | under any voluntary or waivable statutory or compulsory
231 | licensing scheme. In all other cases the Licensor expressly
232 | reserves any right to collect such royalties, including when
233 | the Licensed Material is used other than for NonCommercial
234 | purposes.
235 |
236 |
237 | Section 3 -- License Conditions.
238 |
239 | Your exercise of the Licensed Rights is expressly made subject to the
240 | following conditions.
241 |
242 | a. Attribution.
243 |
244 | 1. If You Share the Licensed Material (including in modified
245 | form), You must:
246 |
247 | a. retain the following if it is supplied by the Licensor
248 | with the Licensed Material:
249 |
250 | i. identification of the creator(s) of the Licensed
251 | Material and any others designated to receive
252 | attribution, in any reasonable manner requested by
253 | the Licensor (including by pseudonym if
254 | designated);
255 |
256 | ii. a copyright notice;
257 |
258 | iii. a notice that refers to this Public License;
259 |
260 | iv. a notice that refers to the disclaimer of
261 | warranties;
262 |
263 | v. a URI or hyperlink to the Licensed Material to the
264 | extent reasonably practicable;
265 |
266 | b. indicate if You modified the Licensed Material and
267 | retain an indication of any previous modifications; and
268 |
269 | c. indicate the Licensed Material is licensed under this
270 | Public License, and include the text of, or the URI or
271 | hyperlink to, this Public License.
272 |
273 | 2. You may satisfy the conditions in Section 3(a)(1) in any
274 | reasonable manner based on the medium, means, and context in
275 | which You Share the Licensed Material. For example, it may be
276 | reasonable to satisfy the conditions by providing a URI or
277 | hyperlink to a resource that includes the required
278 | information.
279 | 3. If requested by the Licensor, You must remove any of the
280 | information required by Section 3(a)(1)(A) to the extent
281 | reasonably practicable.
282 |
283 | b. ShareAlike.
284 |
285 | In addition to the conditions in Section 3(a), if You Share
286 | Adapted Material You produce, the following conditions also apply.
287 |
288 | 1. The Adapter's License You apply must be a Creative Commons
289 | license with the same License Elements, this version or
290 | later, or a BY-NC-SA Compatible License.
291 |
292 | 2. You must include the text of, or the URI or hyperlink to, the
293 | Adapter's License You apply. You may satisfy this condition
294 | in any reasonable manner based on the medium, means, and
295 | context in which You Share Adapted Material.
296 |
297 | 3. You may not offer or impose any additional or different terms
298 | or conditions on, or apply any Effective Technological
299 | Measures to, Adapted Material that restrict exercise of the
300 | rights granted under the Adapter's License You apply.
301 |
302 |
303 | Section 4 -- Sui Generis Database Rights.
304 |
305 | Where the Licensed Rights include Sui Generis Database Rights that
306 | apply to Your use of the Licensed Material:
307 |
308 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right
309 | to extract, reuse, reproduce, and Share all or a substantial
310 | portion of the contents of the database for NonCommercial purposes
311 | only;
312 |
313 | b. if You include all or a substantial portion of the database
314 | contents in a database in which You have Sui Generis Database
315 | Rights, then the database in which You have Sui Generis Database
316 | Rights (but not its individual contents) is Adapted Material,
317 | including for purposes of Section 3(b); and
318 |
319 | c. You must comply with the conditions in Section 3(a) if You Share
320 | all or a substantial portion of the contents of the database.
321 |
322 | For the avoidance of doubt, this Section 4 supplements and does not
323 | replace Your obligations under this Public License where the Licensed
324 | Rights include other Copyright and Similar Rights.
325 |
326 |
327 | Section 5 -- Disclaimer of Warranties and Limitation of Liability.
328 |
329 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
330 | EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
331 | AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
332 | ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
333 | IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
334 | WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
335 | PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
336 | ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
337 | KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
338 | ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
339 |
340 | b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
341 | TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
342 | NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
343 | INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
344 | COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
345 | USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
346 | ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
347 | DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
348 | IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
349 |
350 | c. The disclaimer of warranties and limitation of liability provided
351 | above shall be interpreted in a manner that, to the extent
352 | possible, most closely approximates an absolute disclaimer and
353 | waiver of all liability.
354 |
355 |
356 | Section 6 -- Term and Termination.
357 |
358 | a. This Public License applies for the term of the Copyright and
359 | Similar Rights licensed here. However, if You fail to comply with
360 | this Public License, then Your rights under this Public License
361 | terminate automatically.
362 |
363 | b. Where Your right to use the Licensed Material has terminated under
364 | Section 6(a), it reinstates:
365 |
366 | 1. automatically as of the date the violation is cured, provided
367 | it is cured within 30 days of Your discovery of the
368 | violation; or
369 |
370 | 2. upon express reinstatement by the Licensor.
371 |
372 | For the avoidance of doubt, this Section 6(b) does not affect any
373 | right the Licensor may have to seek remedies for Your violations
374 | of this Public License.
375 |
376 | c. For the avoidance of doubt, the Licensor may also offer the
377 | Licensed Material under separate terms or conditions or stop
378 | distributing the Licensed Material at any time; however, doing so
379 | will not terminate this Public License.
380 |
381 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
382 | License.
383 |
384 |
385 | Section 7 -- Other Terms and Conditions.
386 |
387 | a. The Licensor shall not be bound by any additional or different
388 | terms or conditions communicated by You unless expressly agreed.
389 |
390 | b. Any arrangements, understandings, or agreements regarding the
391 | Licensed Material not stated herein are separate from and
392 | independent of the terms and conditions of this Public License.
393 |
394 |
395 | Section 8 -- Interpretation.
396 |
397 | a. For the avoidance of doubt, this Public License does not, and
398 | shall not be interpreted to, reduce, limit, restrict, or impose
399 | conditions on any use of the Licensed Material that could lawfully
400 | be made without permission under this Public License.
401 |
402 | b. To the extent possible, if any provision of this Public License is
403 | deemed unenforceable, it shall be automatically reformed to the
404 | minimum extent necessary to make it enforceable. If the provision
405 | cannot be reformed, it shall be severed from this Public License
406 | without affecting the enforceability of the remaining terms and
407 | conditions.
408 |
409 | c. No term or condition of this Public License will be waived and no
410 | failure to comply consented to unless expressly agreed to by the
411 | Licensor.
412 |
413 | d. Nothing in this Public License constitutes or may be interpreted
414 | as a limitation upon, or waiver of, any privileges and immunities
415 | that apply to the Licensor or You, including from the legal
416 | processes of any jurisdiction or authority.
417 |
418 | =======================================================================
419 |
420 | Creative Commons is not a party to its public
421 | licenses. Notwithstanding, Creative Commons may elect to apply one of
422 | its public licenses to material it publishes and in those instances
423 | will be considered the “Licensor.” The text of the Creative Commons
424 | public licenses is dedicated to the public domain under the CC0 Public
425 | Domain Dedication. Except for the limited purpose of indicating that
426 | material is shared under a Creative Commons public license or as
427 | otherwise permitted by the Creative Commons policies published at
428 | creativecommons.org/policies, Creative Commons does not authorize the
429 | use of the trademark "Creative Commons" or any other trademark or logo
430 | of Creative Commons without its prior written consent including,
431 | without limitation, in connection with any unauthorized modifications
432 | to any of its public licenses or any other arrangements,
433 | understandings, or agreements concerning use of licensed material. For
434 | the avoidance of doubt, this paragraph does not form part of the
435 | public licenses.
436 |
437 | Creative Commons may be contacted at creativecommons.org.
438 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # Google Scholar Citation World Map
3 |
4 |
5 |
6 | **CitationMap: A Python Tool to Identify and Visualize Your Google Scholar Citations Around the World** [[PDF](https://openreview.net/pdf?id=BqJgCgl1IA)]
7 |
8 |
9 |
10 |
11 | [OpenReview](https://openreview.net/pdf?id=BqJgCgl1IA)
12 | [TechRxiv](https://www.techrxiv.org/users/809001/articles/1213717-citationmap-a-python-tool-to-identify-and-visualize-your-google-scholar-citations-around-the-world)
13 | [Twitter](https://twitter.com/ChenLiu_1996)
14 | [LinkedIn](https://www.linkedin.com/in/chenliu1996/)
15 | [![CC BY-NC-SA 4.0][cc-by-nc-sa-shield]][cc-by-nc-sa]
16 | [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
17 |
18 | [PyPI](https://pypi.org/project/citation-map/)
19 | [Downloads](https://pepy.tech/projects/citation-map)
20 | [PyPI Stats](https://pypistats.org/packages/citation-map)
21 |
22 | [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
23 | [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
24 | [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
25 |
26 |
27 |
28 | ⚠ There is **no need to fork** this repo unless you want to make custom changes.
29 |
30 | ⚠ It only takes **one line to install** and **almost one line to run**!
31 |
32 |
33 |
34 | I am [Chen Liu](https://chenliu-1996.github.io/), a CS PhD Candidate at Yale University.
35 |
36 | Research areas: Machine Learning (Manifold Learning, Spatial-Temporal Dynamics, Medical Vision, AI4Science).
37 |
38 |
39 | ## Purpose
40 | This is a simple Python tool to generate an HTML citation world map from your Google Scholar ID.
41 |
42 | It is easy to install (`pip install citation-map`, available on [PyPI](https://pypi.org/project/citation-map/)) and easy to use (see the [User Guide](https://github.com/ChenLiu-1996/CitationMap?tab=readme-ov-file#user-guide)).
43 |
44 | ## Expected Outcome
45 | You will be given an **HTML file** as the output of the script.
46 |
47 | If you open it in a browser, you will see your own version of the following citation world map.
48 |
49 |
50 |
51 | Besides, there will be a **CSV file** recording citation information (citing author, citing paper, cited paper, affiliation, detailed location).
52 |
53 | **Disclaimer:** It is reasonable for this tool to make some minor mistakes: missing a few citing authors, dropping a couple of pins in the wrong locations, etc. If you care a lot about ensuring all citing authors' affiliations are included and accurately marked, you could try manually annotating on [Google My Maps](https://www.google.com/maps/d/) instead. This tool is meant to help people who cannot tolerate that painful process, especially when they have a decent number of citations.
54 |
55 | **NOTE:** **Now you can trade off between affiliation precision and recall** with the `affiliation_conservative` option. If set to True, we will use the Google Scholar verified official organization name from the citing authors. This is a very conservative approach, since (1) the author needs to self-report it in the affiliation panel, (2) the author needs to verify with the matching email address, and (3) the organization needs to be recorded by Google Scholar. For example, Meta (the company) is not in the list. Many thanks to [Zhijian Liu](https://github.com/zhijian-liu) for the [helpful discussion](https://github.com/ChenLiu-1996/CitationMap/issues/8).
56 |
57 | ## Citation
58 | BibTeX
59 | ```
60 | @article{citationmap,
61 | title={CitationMap: A Python Tool to Identify and Visualize Your Google Scholar Citations Around the World},
62 | author={Liu, Chen},
63 | journal={Authorea Preprints},
64 | year={2024},
65 | publisher={Authorea}
66 | }
67 | ```
68 | MLA
69 | ```
70 | Liu, Chen. "CitationMap: A Python Tool to Identify and Visualize Your Google Scholar Citations Around the World." Authorea Preprints (2024).
71 | ```
72 | APA
73 | ```
74 | Liu, C. (2024). CitationMap: A Python Tool to Identify and Visualize Your Google Scholar Citations Around the World. Authorea Preprints.
75 | ```
76 | Chicago
77 | ```
78 | Liu, Chen. "CitationMap: A Python Tool to Identify and Visualize Your Google Scholar Citations Around the World." Authorea Preprints (2024).
79 | ```
80 |
81 | ## News
82 | **[Asking for advice]**
83 | 1. This is my first time dealing with web scraping/crawling. Users have reported stability issues, and I suspect the major problems are (1) being caught by CAPTCHA or robot checks, and (2) being blacklisted by Google Scholar. If you are experienced in these areas and have good advice, I would highly appreciate a **GitHub issue or a pull request**.
84 |
85 | [Aug 2, 2024] Version 4.0 released >>> Logic update. A new input argument `affiliation_conservative`. If set to True, we will use **a very conservative approach to identifying affiliations**, which leads to **much higher precision and lower recall**. Many thanks to [Zhijian Liu](https://github.com/zhijian-liu) for the [helpful discussion](https://github.com/ChenLiu-1996/CitationMap/issues/8).
86 |
87 | [Jul 28, 2024] Version 3.10 released >>> Logic update. Tested on a professor's profile **with 10,000 citations**!
88 |
89 | [Jul 27, 2024] Version 2.0 released >>> 10x speedup with multiprocessing (1 hour to 5 minutes for my profile).
90 |
91 | [Jul 26, 2024] Version 1.0 released >>> First working version for my profile with 100 citations.
92 |
93 | ## User Guide
94 | 0. If you are new to Python, you probably want to start with a distribution of Python that also helps with environment management (such as [anaconda](https://www.anaconda.com/)). Once you have set up your environment (for example, when you reach the stage of `conda activate env39` in [this tutorial](https://www.youtube.com/watch?v=MUZtVEDKXsk&t=242s)), you can move on to the next step.
95 | 1. Install this tool by running the following line in your conda-accessible command line.
96 | ```
97 | pip install citation-map --upgrade
98 | ```
99 |
100 | 2. Find your Google Scholar ID.
101 |
102 | - Open your Google Scholar profile. The URL should take the form of `https://scholar.google.com/citations?user=GOOGLE_SCHOLAR_ID`. In this case, your Google Scholar ID is just the string `GOOGLE_SCHOLAR_ID`.
103 | - Please kindly ignore configuration strings such as `&hl=en` (host language is English) or `&sortby=pubdate` (sort the works by date of publication).
104 |    - **NOTE**: If you have publications/patents that you **_manually added to your Google Scholar page_**, you might want to temporarily delete them while you run this tool. They might cause errors due to incompatibility.
105 |
106 | 3. In an empty Python script, run the following.
107 |
108 | - **NOTE 1**: Please **DO NOT** name your script `citation_map.py` which would cause circular import as this package itself shares the same name. Call it something else: e.g., `run_citation_map.py`, `run.py`, etc. See [Issue #2](https://github.com/ChenLiu-1996/CitationMap/issues/2).
109 |    - **NOTE 2**: Protecting the entry point with `if __name__ == '__main__'` seems necessary to avoid multiprocessing problems, and it is good practice anyway.
110 | ```
111 | from citation_map import generate_citation_map
112 |
113 | if __name__ == '__main__':
114 | # This is my Google Scholar ID. Replace this with your ID.
115 | scholar_id = '3rDjnykAAAAJ'
116 | generate_citation_map(scholar_id)
117 | ```
118 |
119 | Note that since Version 4.5, we support manual editing of the CSV. You can run `generate_citation_map` first to get the CSV and HTML. Then, you can manually modify the CSV content and generate an updated HTML by running `generate_citation_map` again with `parse_csv=True`.
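
A minimal sketch of this two-pass workflow (the file names are the defaults, `citation_info.csv` and `citation_map.html`):
```
from citation_map import generate_citation_map

if __name__ == '__main__':
    scholar_id = '3rDjnykAAAAJ'  # Replace with your ID.

    # First run: crawl Google Scholar and produce the CSV and the HTML map.
    generate_citation_map(scholar_id)

    # After manually editing citation_info.csv (coordinates included), rebuild
    # the HTML from the edited CSV without re-crawling:
    # generate_citation_map(scholar_id, parse_csv=True)
```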
120 |
121 | Note that since Version 4.0, we cache the results before identifying affiliations. So if you want to rerun the same author from scratch, you need to delete the cache (the default location is `cache`).
122 |
123 | More input arguments are shown in [the demo script](https://github.com/ChenLiu-1996/CitationMap/blob/main/demo/demo.py).
124 |
125 | You can take a look at the input arguments (listed below) of the function `generate_citation_map` in case you need those functionalities.
126 |
127 | ```
128 | Parameters
129 | ----
130 | scholar_id: str
131 | Your Google Scholar ID.
132 | output_path: str
133 | (default is 'citation_map.html')
134 | The path to the output HTML file.
135 | csv_output_path: str
136 | (default is 'citation_info.csv')
137 | The path to the output csv file.
138 | parse_csv: bool
139 | (default is False)
140 | If True, will directly jump to Step 5.2, using the information loaded from the csv.
141 | cache_folder: str
142 | (default is 'cache')
143 | The folder to save intermediate results, after finding (author, paper) but before finding the affiliations.
144 | This is because the user might want to try the aggressive vs. conservative approach.
145 | Set to None if you do not want caching.
146 | affiliation_conservative: bool
147 | (default is False)
148 | If true, we will use a more conservative approach to identify affiliations.
149 | If false, we will use a more aggressive approach to identify affiliations.
150 | num_processes: int
151 | (default is 16)
152 | Number of processes for parallel processing.
153 | use_proxy: bool
154 | (default is False)
155 | If true, we will use a scholarly proxy.
156 | It is necessary for some environments to avoid blocks, but it usually makes things slower.
157 | pin_colorful: bool
158 | (default is True)
159 | If true, the location pins will have a variety of colors.
160 | Otherwise, it will only have one color.
161 | print_citing_affiliations: bool
162 | (default is True)
163 | If true, print the list of citing affiliations (affiliations of citing authors).
164 | ```
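
For instance, a call that sets several of these arguments explicitly might look like the following sketch (the argument names are taken from the list above; the values are illustrative):
```
from citation_map import generate_citation_map

if __name__ == '__main__':
    generate_citation_map(
        scholar_id='3rDjnykAAAAJ',            # Replace with your ID.
        output_path='citation_map.html',
        csv_output_path='citation_info.csv',
        cache_folder=None,                    # Disable caching of intermediate results.
        affiliation_conservative=True,        # Higher precision, lower recall.
        num_processes=4,
        use_proxy=False,
        pin_colorful=True,
        print_citing_affiliations=False,
    )
```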
165 |
166 | ## Limitations
167 |
168 | 1. This tool is purely based on Google Scholar. As a result, you are expected to have underestimations due to reasons such as:
169 | - Your Google Scholar profile is not up-to-date.
170 | - Some papers citing you are not indexed by Google Scholar.
171 | - Some authors citing you do not have Google Scholar profiles.
172 | - Some authors citing you do not report their affiliations.
173 | 2. Web scraping is performed, and CAPTCHA or robot checks can often catch us, especially when we crawl frequently. This happens more often for highly cited users. Unless you are blocked outright by Google Scholar, at worst you will end up missing several citing authors, which is unlikely to be a huge deal for highly cited users anyway.
174 | 3. Affiliation identification and geolocating issues. This is a joint effect between affiliation identification and geolocating. The number of citing affiliations will be:
175 | - Underestimated if some affiliations are not found by `geopy.geocoders`.
176 | - Underestimated if we experience communication errors with `geopy.geocoders`.
177 | - (Aggressive approach only) Overestimated if non-affiliation phrases are incorrectly identified as locations by `geopy.geocoders`.
178 | - (Conservative approach only) Underestimated if the citer did not verify with an email address under a matching affiliation domain.
179 |    - (Conservative approach only) Underestimated because all affiliations other than the primary one (the one with the verified email address) are ignored.
180 |
181 | **Please raise an issue or submit a pull request if you have a good idea for better processing the affiliation string. Note that I am currently not considering any paid services or tools that pose an extra burden on users, such as the GPT API.**
182 |
183 |
184 | ## Debug
185 |
186 | 1. `MaxTriesExceededException` or `Exception: Failed to fetch the Google Scholar page` or (`[WARNING!] Blocked by CAPTCHA or robot check` for all entries).
187 |
188 | - From my experience, these are good indicators that your IP address is blocked by Google Scholar due to excessive crawling (using the `scholarly` package).
189 | - One hot fix I found was to hop on a University VPN and run again. I typically experience this error after running the tool twice, and I need to disconnect and reconnect my VPN to "unblock" myself.
190 |    - In case this does not help, you can try changing your IP address and reducing the number of processes (e.g., setting `num_processes=1`).
191 |    - If you get `[WARNING!] Blocked by CAPTCHA or robot check` only a few times, it is not a big deal, especially if you have many citing authors.
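   - For example, a conservative configuration along these lines may reduce the crawling load (slower, but less likely to be blocked):
```
from citation_map import generate_citation_map

if __name__ == '__main__':
    # Single process plus a scholarly proxy: slower, but gentler on Google Scholar.
    generate_citation_map('3rDjnykAAAAJ', num_processes=1, use_proxy=True)
```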
192 |
193 | 2. `An attempt has been made to start a new process before the current process has finished its bootstrapping phase.`
194 |
195 | - I believe this is because you did not protect your main function with `if __name__ == '__main__'`. You can take a look at the recommended script again.
196 | - If this still does not help, you might want to write your script slightly differently. Credit to [dk-liang](https://github.com/dk-liang) in [Issue #4](https://github.com/ChenLiu-1996/CitationMap/issues/4#issuecomment-2257572672).
197 | ```
198 | from citation_map import generate_citation_map
199 |
200 | def main():
201 | # This is my Google Scholar ID. Replace this with your ID.
202 | scholar_id = '3rDjnykAAAAJ'
203 | generate_citation_map(scholar_id)
204 |
205 | if __name__ == '__main__':
206 | import multiprocessing
207 | multiprocessing.freeze_support()
208 | main()
209 | ```
210 |
211 | ## Changelog
212 |
213 | Version 4.5 (Dec 6, 2024)
214 |
215 |
216 | 1. **Support manual editing of the CSV.** Note that you do need to manually correct the coordinates too if you use this functionality. You can run `generate_citation_map` first to get the CSV and HTML. Then, you can manually modify the CSV content and generate an updated HTML by running `generate_citation_map` again with `parse_csv=True`.
217 | 2. Slight optimization on caching.
218 |
219 |
220 |
221 |
222 |
223 | Version 4.0 (Aug 2, 2024)
224 |
225 |
226 | 1. **Now you can trade off between precision and recall as we identify the affiliations.**
227 | 2. Added caching before identifying the affiliations, since users might want to try both the conservative and aggressive approaches.
228 | 3. Slight optimization in the affiliation to geocode stage by adding early breaking if successful.
229 |
230 |
231 |
232 |
233 |
234 | Version 3.11 (Jul 28, 2024)
235 |
236 |
237 | **Additional output CSV that records citation information.**
238 |
239 |
240 |
241 | Version 3.10 (Jul 28, 2024)
242 |
243 | In 3.10, I slightly improved the logic for affiliation extraction.
244 |
245 | In 3.8, I removed multiprocessing for `geopy.geocoders`, as per their official documentation. Also, I cleaned up some unnecessary `scholarly` calls, which further helps us avoid being blacklisted by Google Scholar.
246 |
247 | In 3.7, I updated the logic for web scraping and avoided using `scholarly.citedby()`, which is the biggest trigger of blacklisting by Google Scholar.
248 |
249 | **Now we should be able to handle users with more citations than before. I tested on a profile with 1000 citations without running into any issue.**
250 |
251 |
252 |
253 | Version 3.0 (Jul 27, 2024)
254 |
255 | I realized a problem with how I used `geopy.geocoders`. A majority of the authors' affiliation strings include details that are irrelevant to the affiliation itself. As a result, they are not found by the geocoder and hence are not converted to geographic coordinates on the world map.
256 |
257 | For example, we would want the substring "Yale University" from the string "Assistant Professor at Yale University".
258 |
259 | I applied a simple fix with some rule-based natural language processing. This helps us identify many missing citing locations.
260 |
261 | **Please raise an issue or submit a pull request if you have a good idea for better processing the affiliation string. Note that I am currently not considering any paid services or tools that pose an extra burden on users, such as the GPT API.**
262 |
263 |
264 |
265 | Version 2.0 (Jul 27, 2024)
266 |
267 | I finally managed to **drastically speed up** the process using multiprocessing, in a way that avoids being blocked by Google Scholar.
268 |
269 | On my personal computer, processing my profile with 100 citations took 1 hour with Version 1.0, while it now takes 5 minutes with Version 2.0.
270 |
271 | With that said, please be careful and do not run this tool frequently. One can easily end up on Google Scholar's blacklist after a few runs.
272 |
273 |
274 |
275 | Version 1.0 (Jul 26, 2024)
276 |
277 | Very basic functionality.
278 |
279 | This script is a bit slow. On my personal computer, it takes half a minute to process each citation. If you have thousands of citations, it may or may not be a good idea to use this script.
280 |
281 | I tried to use multiprocessing, but unfortunately the excessive visits got me blocked by Google Scholar.
282 |
283 |
284 | ## Dependencies
285 | Dependencies (`scholarly`, `geopy`, `folium`, `tqdm`, `requests`, `bs4`, `pycountry`, `pandas`) are already taken care of when you install via pip.
286 |
287 | ## Acknowledgements
288 | This script was written with the assistance of ChatGPT-4o, but of course only after intense debugging.
289 |
--------------------------------------------------------------------------------
/assets/citation_world_map.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ChenLiu-1996/CitationMap/67f58d53281ffba9fce431f24a4a40456a92fd61/assets/citation_world_map.png
--------------------------------------------------------------------------------
/citation_map/__init__.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2024 Chen Liu
2 | # All rights reserved.
3 | from .citation_map import generate_citation_map
4 | from .scholarly_support import get_citing_author_ids_and_citing_papers
5 |
6 | __all__ = ['generate_citation_map', 'get_citing_author_ids_and_citing_papers']
7 |
8 |
--------------------------------------------------------------------------------
/citation_map/citation_map.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2024 Chen Liu
2 | # All rights reserved.
3 | import folium
4 | import itertools
5 | import pandas as pd
6 | import os
7 | import pickle
8 | import pycountry
9 | import re
10 | import random
11 | import time
12 |
13 | from geopy.geocoders import Nominatim
14 | from multiprocessing import Pool
15 | from scholarly import scholarly, ProxyGenerator
16 | from tqdm import tqdm
17 | from typing import Any, List, Tuple, Optional
18 |
19 | from .scholarly_support import get_citing_author_ids_and_citing_papers, get_organization_name, NO_AUTHOR_FOUND_STR, KNOWN_AFFILIATION_DICT
20 |
21 |
22 | def find_all_citing_authors(scholar_id: str, num_processes: int = 16) -> List[Tuple[str]]:
23 | '''
24 | Step 1. Find all publications of the given Google Scholar ID.
25 | Step 2. Find all citing authors.
26 | '''
27 | # Find Google Scholar Profile using Scholar ID.
28 | author = scholarly.search_author_id(scholar_id)
29 | author = scholarly.fill(author, sections=['publications'])
30 | publications = author['publications']
31 | print('Author profile found, with %d publications.\n' % len(publications))
32 |
33 | # Fetch metadata for all publications.
34 | if num_processes > 1 and isinstance(num_processes, int):
35 | with Pool(processes=num_processes) as pool:
36 | all_publications = list(tqdm(pool.imap(__fill_publication_metadata, publications),
37 | desc='Filling metadata for your %d publications' % len(publications),
38 | total=len(publications)))
39 | else:
40 | all_publications = []
41 | for pub in tqdm(publications,
42 | desc='Filling metadata for your %d publications' % len(publications),
43 | total=len(publications)):
44 | all_publications.append(__fill_publication_metadata(pub))
45 |
46 | # Convert all publications to Google Scholar publication IDs and paper titles.
47 | # This is fast and no parallel processing is needed.
48 | all_publication_info = []
49 | for pub in all_publications:
50 | if 'cites_id' in pub:
51 | for cites_id in pub['cites_id']:
52 | pub_title = pub['bib']['title']
53 | all_publication_info.append((cites_id, pub_title))
54 |
55 | # Find all citing authors from all publications.
56 | if num_processes > 1 and isinstance(num_processes, int):
57 | with Pool(processes=num_processes) as pool:
58 | all_citing_author_paper_info_nested = list(tqdm(pool.imap(__citing_authors_and_papers_from_publication, all_publication_info),
59 | desc='Finding citing authors and papers on your %d publications' % len(all_publication_info),
60 | total=len(all_publication_info)))
61 | else:
62 | all_citing_author_paper_info_nested = []
63 | for pub in tqdm(all_publication_info,
64 | desc='Finding citing authors and papers on your %d publications' % len(all_publication_info),
65 | total=len(all_publication_info)):
66 | all_citing_author_paper_info_nested.append(__citing_authors_and_papers_from_publication(pub))
67 | all_citing_author_paper_tuple_list = list(itertools.chain(*all_citing_author_paper_info_nested))
68 | return all_citing_author_paper_tuple_list
69 |
70 | def find_all_citing_affiliations(all_citing_author_paper_tuple_list: List[Tuple[str]],
71 | num_processes: int = 16,
72 | affiliation_conservative: bool = False):
73 | '''
74 | Step 3. Find all citing affiliations.
75 | '''
76 | if affiliation_conservative:
77 | __affiliations_from_authors = __affiliations_from_authors_conservative
78 | else:
79 | __affiliations_from_authors = __affiliations_from_authors_aggressive
80 |
81 |     # Find all citing institutions from all citing authors.
82 | if num_processes > 1 and isinstance(num_processes, int):
83 | with Pool(processes=num_processes) as pool:
84 | author_paper_affiliation_tuple_list = list(tqdm(pool.imap(__affiliations_from_authors, all_citing_author_paper_tuple_list),
85 | desc='Finding citing affiliations from %d citing authors' % len(all_citing_author_paper_tuple_list),
86 | total=len(all_citing_author_paper_tuple_list)))
87 | else:
88 | author_paper_affiliation_tuple_list = []
89 | for author_and_paper in tqdm(all_citing_author_paper_tuple_list,
90 | desc='Finding citing affiliations from %d citing authors' % len(all_citing_author_paper_tuple_list),
91 | total=len(all_citing_author_paper_tuple_list)):
92 | author_paper_affiliation_tuple_list.append(__affiliations_from_authors(author_and_paper))
93 |
94 | # Filter empty items.
95 | author_paper_affiliation_tuple_list = [item for item in author_paper_affiliation_tuple_list if item]
96 | return author_paper_affiliation_tuple_list
97 |
98 | def clean_affiliation_names(author_paper_affiliation_tuple_list: List[Tuple[str]]) -> List[Tuple[str]]:
99 | '''
100 | Optional Step. Clean up the names of affiliations from the authors' affiliation tab on their Google Scholar profiles.
101 | NOTE: This logic is very naive. Please send an issue or pull request if you have any idea how to improve it.
102 | Currently we will not consider any paid service or tools that pose extra burden on the users, such as GPT API.
103 | '''
104 | cleaned_author_paper_affiliation_tuple_list = []
105 | for author_name, citing_paper_title, cited_paper_title, affiliation_string in author_paper_affiliation_tuple_list:
106 | if author_name == NO_AUTHOR_FOUND_STR:
107 | cleaned_author_paper_affiliation_tuple_list.append((NO_AUTHOR_FOUND_STR, citing_paper_title, cited_paper_title, NO_AUTHOR_FOUND_STR))
108 | else:
109 | # Use a regular expression to split the string by ';' or 'and'.
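            # e.g., 'Yale University; Google and Meta' -> ['Yale University', 'Google', 'Meta'].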
110 | substring_list = [part.strip() for part in re.split(r'[;]|\band\b', affiliation_string)]
111 | # Further split the substrings by ',' if the latter component is not a country.
112 | substring_list = __country_aware_comma_split(substring_list)
113 |
114 | for substring in substring_list:
115 | # Use a regular expression to remove anything before 'at', or '@'.
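                # e.g., 'Assistant Professor at Yale University' -> 'Yale University'.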
116 | cleaned_affiliation = re.sub(r'.*?\bat\b|.*?@', '', substring, flags=re.IGNORECASE).strip()
117 | # Use a regular expression to filter out strings that represent
118 | # a person's identity rather than affiliation.
119 | is_common_identity_string = re.search(
120 | re.compile(
121 | r'\b(director|manager|chair|engineer|programmer|scientist|professor|lecturer|phd|ph\.d|postdoc|doctor|student|department of)\b',
122 | re.IGNORECASE),
123 | cleaned_affiliation)
124 | if not is_common_identity_string:
125 | cleaned_author_paper_affiliation_tuple_list.append((author_name, citing_paper_title, cited_paper_title, cleaned_affiliation))
126 | return cleaned_author_paper_affiliation_tuple_list
127 |
128 | def fill_known_affiliations(affiliation_name: str) -> Optional[str]:
129 | '''
130 | If the affiliation is known, return its geolocation.
131 | If not, return None.
132 |     The reason we have this function is that the geolocator may return hilarious results,
133 |     such as putting the company Amazon in the Amazon rain forest.
134 |     NOTE: This is a temporary fix. It can be replaced by a smarter natural language parser.
135 | '''
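    # NOTE: KNOWN_AFFILIATION_DICT (from scholarly_support) is assumed to map affiliation
    # keywords (matched case-insensitively) to (county, city, state, country, latitude,
    # longitude) tuples, matching how the return value is unpacked in Step 4 below.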
136 | for key in KNOWN_AFFILIATION_DICT:
137 | if key in affiliation_name.lower():
138 | return KNOWN_AFFILIATION_DICT[key]
139 | return None
140 |
141 | def affiliation_invalid(affiliation_name: str) -> bool:
142 | '''
143 | Check if the affiliation is invalid.
144 | Typical invalid affiliation contains non-affiliation words, such as 'computer science'.
145 | Invalid affiliations will waste time in geolocator.geocode(affiliation_name).
146 |     NOTE: This is a temporary fix. It can be replaced by a smarter natural language parser.
147 | '''
148 | invalid_affiliation_set = {
149 | NO_AUTHOR_FOUND_STR.lower(),
150 | 'computer', 'computer science', 'electrical', 'engineering', 'researcher',
151 | 'scholar', 'inc.', 'school', 'department', 'student', 'candidate', 'professor', 'faculty', 'associate'
152 | }
153 | for key in invalid_affiliation_set:
154 | if key in affiliation_name.lower():
155 | return True
156 | return False
157 |
158 | def affiliation_text_to_geocode(author_paper_affiliation_tuple_list: List[Tuple[str]], max_attempts: int = 3) -> List[Tuple[str]]:
159 | '''
160 | Step 4: Convert affiliations in plain text to Geocode.
161 | '''
162 | coordinates_and_info = []
163 | # NOTE: According to the Nominatim Usage Policy (https://operations.osmfoundation.org/policies/nominatim/),
164 | # we are explicitly asked not to submit bulk requests on multiple threads.
165 | # Therefore, we will keep it to a loop instead of multiprocessing.
166 | geolocator = Nominatim(user_agent='citation_mapper')
167 |
168 | # Find unique affiliations and record their corresponding entries.
169 | affiliation_map = {}
170 | for entry_idx, (_, _, _, affiliation_name) in enumerate(author_paper_affiliation_tuple_list):
171 | if affiliation_name not in affiliation_map.keys():
172 | affiliation_map[affiliation_name] = [entry_idx]
173 | else:
174 | affiliation_map[affiliation_name].append(entry_idx)
175 |
176 | num_total_affiliations = len(affiliation_map)
177 | num_located_affiliations = 0
178 | for affiliation_name in tqdm(affiliation_map,
179 | desc='Finding geographic coordinates from %d unique citing affiliations in %d entries' % (
180 | len(affiliation_map), len(author_paper_affiliation_tuple_list)),
181 | total=len(affiliation_map)):
182 | if affiliation_invalid(affiliation_name):
183 | # If an affiliation is invalid, we will not run geolocator on it.
184 | # However, we still record it in the csv, so that the user can choose to manually correct it.
185 | corresponding_entries = affiliation_map[affiliation_name]
186 | for entry_idx in corresponding_entries:
187 | author_name, citing_paper_title, cited_paper_title, affiliation_name = author_paper_affiliation_tuple_list[entry_idx]
188 | coordinates_and_info.append((author_name, citing_paper_title, cited_paper_title, affiliation_name,
189 | '', '', '', '', '', ''))
190 | else:
191 | # Directly enter information if the affiliation is known.
192 | geo_location = fill_known_affiliations(affiliation_name)
193 | if geo_location is not None:
194 | county, city, state, country, latitude, longitude = geo_location
195 | corresponding_entries = affiliation_map[affiliation_name]
196 | for entry_idx in corresponding_entries:
197 | author_name, citing_paper_title, cited_paper_title, affiliation_name = author_paper_affiliation_tuple_list[entry_idx]
198 | coordinates_and_info.append((author_name, citing_paper_title, cited_paper_title, affiliation_name,
199 | latitude, longitude, county, city, state, country))
200 | # This location is successfully recorded.
201 | num_located_affiliations += 1
202 | else:
203 | for _ in range(max_attempts):
204 | try:
205 | geo_location = geolocator.geocode(affiliation_name)
206 | if geo_location is not None:
207 | # Get the full location metadata that includes county, city, state, country, etc.
208 | location_metadata = geolocator.reverse(str(geo_location.latitude) + ',' + str(geo_location.longitude), language='en')
209 | address = location_metadata.raw['address']
210 | county, city, state, country = None, None, None, None
211 | if 'county' in address:
212 | county = address['county']
213 | if 'city' in address:
214 | city = address['city']
215 | if 'state' in address:
216 | state = address['state']
217 | if 'country' in address:
218 | country = address['country']
219 |
220 | corresponding_entries = affiliation_map[affiliation_name]
221 | for entry_idx in corresponding_entries:
222 | author_name, citing_paper_title, cited_paper_title, affiliation_name = author_paper_affiliation_tuple_list[entry_idx]
223 | coordinates_and_info.append((author_name, citing_paper_title, cited_paper_title, affiliation_name,
224 | geo_location.latitude, geo_location.longitude,
225 | county, city, state, country))
226 | # This location is successfully recorded.
227 | num_located_affiliations += 1
228 | break
229 |                     except Exception:  # Retry on transient geocoding errors, up to max_attempts times.
230 | continue
231 | print('\nConverted %d/%d affiliations to Geocodes.' % (num_located_affiliations, num_total_affiliations))
232 | coordinates_and_info = [item for item in coordinates_and_info if item is not None] # Filter out empty entries.
233 | return coordinates_and_info
234 |
235 | def export_dict_to_csv(coordinates_and_info: List[Tuple[str]], csv_output_path: str) -> None:
236 | '''
237 | Step 5.1: Export csv file recording citation information.
238 | '''
239 |
240 | citation_df = pd.DataFrame(coordinates_and_info,
241 | columns=['citing author name', 'citing paper title', 'cited paper title',
242 | 'affiliation', 'latitude', 'longitude',
243 | 'county', 'city', 'state', 'country'])
244 |
245 | citation_df.to_csv(csv_output_path)
246 | return
247 |
248 | def read_csv_to_dict(csv_path: str) -> List[Tuple[str]]:
249 |     '''
250 |     Step 5.1: Read the csv file recording citation information.
251 |     Only relevant if `parse_csv` is True.
252 | '''
253 |
254 | citation_df = pd.read_csv(csv_path, index_col=0)
255 | coordinates_and_info = list(citation_df.itertuples(index=False, name=None))
256 | return coordinates_and_info
257 |
258 | def create_map(coordinates_and_info: List[Tuple[str]], pin_colorful: bool = True):
259 | '''
260 | Step 5.2: Create the Citation World Map.
261 |
262 |     Authors under the same affiliation will be displayed in the same pin.
263 | '''
264 | citation_map = folium.Map(location=[20, 0], zoom_start=2)
265 |
266 | # Find unique affiliations and record their corresponding entries.
267 | affiliation_map = {}
268 | for entry_idx, (_, _, _, affiliation_name, _, _, _, _, _, _) in enumerate(coordinates_and_info):
269 | if affiliation_name == NO_AUTHOR_FOUND_STR:
270 | continue
271 | elif affiliation_name not in affiliation_map.keys():
272 | affiliation_map[affiliation_name] = [entry_idx]
273 | else:
274 | affiliation_map[affiliation_name].append(entry_idx)
275 |
276 | if pin_colorful:
277 | colors = ['red', 'blue', 'green', 'purple', 'orange', 'darkred',
278 | 'lightred', 'beige', 'darkblue', 'darkgreen', 'cadetblue',
279 | 'darkpurple', 'pink', 'lightblue', 'lightgreen',
280 | 'gray', 'black', 'lightgray']
281 | for affiliation_name in affiliation_map:
282 | color = random.choice(colors)
283 | corresponding_entries = affiliation_map[affiliation_name]
284 | author_name_list = []
285 | location_valid = True
286 | for entry_idx in corresponding_entries:
287 | author_name, _, _, _, lat, lon, _, _, _, _ = coordinates_and_info[entry_idx]
288 | if pd.isna(lat) or pd.isna(lon) or lat == '' or lon == '':
289 | location_valid = False
290 | author_name_list.append(author_name)
291 | if location_valid:
292 | folium.Marker([lat, lon], popup='%s (%s)' % (affiliation_name, ' & '.join(author_name_list)),
293 | icon=folium.Icon(color=color)).add_to(citation_map)
294 | else:
295 | for affiliation_name in affiliation_map:
296 | corresponding_entries = affiliation_map[affiliation_name]
297 | author_name_list = []
298 | location_valid = True
299 | for entry_idx in corresponding_entries:
300 |                 author_name, _, _, _, lat, lon, _, _, _, _ = coordinates_and_info[entry_idx]  # Unpack before validating the coordinates.
301 |                 if pd.isna(lat) or pd.isna(lon) or lat == '' or lon == '':
302 |                     location_valid = False
303 | author_name_list.append(author_name)
304 | if location_valid:
305 | folium.Marker([lat, lon], popup='%s (%s)' % (affiliation_name, ' & '.join(author_name_list))).add_to(citation_map)
306 | return citation_map
307 |
308 | def count_citation_stats(coordinates_and_info: List[Tuple[str]]) -> List[int]:
309 | '''
310 | Count the number of citing authors, affiliations and countries.
311 | '''
312 | unique_author_list, unique_affiliation_list, unique_country_list = set(), set(), set()
313 | for (author_name, _, _, affiliation_name, _, _, _, _, _, country) in coordinates_and_info:
314 | if affiliation_name == NO_AUTHOR_FOUND_STR:
315 | continue
316 | unique_author_list.add(author_name)
317 | unique_affiliation_list.add(affiliation_name)
318 | unique_country_list.add(country)
319 | num_authors, num_affiliations, num_countries = \
320 | len(unique_author_list), len(unique_affiliation_list), len(unique_country_list)
321 | return num_authors, num_affiliations, num_countries
322 |
323 | def __fill_publication_metadata(pub):
324 | time.sleep(random.uniform(1, 5)) # Random delay to reduce risk of being blocked.
325 | return scholarly.fill(pub)
326 |
327 | def __citing_authors_and_papers_from_publication(cites_id_and_cited_paper: Tuple[str, str]):
328 | cites_id, cited_paper_title = cites_id_and_cited_paper
329 | citing_paper_search_url = 'https://scholar.google.com/scholar?hl=en&cites=' + cites_id
330 | citing_authors_and_citing_papers = get_citing_author_ids_and_citing_papers(citing_paper_search_url)
331 | citing_author_paper_info = []
332 | for citing_author_id, citing_paper_title in citing_authors_and_citing_papers:
333 | citing_author_paper_info.append((citing_author_id, citing_paper_title, cited_paper_title))
334 | return citing_author_paper_info
335 |
336 | def __affiliations_from_authors_conservative(citing_author_paper_info: Tuple[str, str, str]):
337 | '''
338 | Conservative: only use Google Scholar verified organization.
339 | This will have higher precision and lower recall.
340 | '''
341 | citing_author_id, citing_paper_title, cited_paper_title = citing_author_paper_info
342 | if citing_author_id == NO_AUTHOR_FOUND_STR:
343 | return (NO_AUTHOR_FOUND_STR, citing_paper_title, cited_paper_title, NO_AUTHOR_FOUND_STR)
344 |
345 | time.sleep(random.uniform(1, 5)) # Random delay to reduce risk of being blocked.
346 | citing_author = scholarly.search_author_id(citing_author_id)
347 |
348 | if 'organization' in citing_author:
349 | try:
350 | author_organization = get_organization_name(citing_author['organization'])
351 | return (citing_author['name'], citing_paper_title, cited_paper_title, author_organization)
352 | except Exception as e:
353 | print('[Warning!]', e)
354 | return None
355 | return None
356 |
357 | def __affiliations_from_authors_aggressive(citing_author_paper_info: Tuple[str, str, str]):
358 | '''
359 | Aggressive: use the self-reported affiliation string from the Google Scholar affiliation panel.
360 | This will have lower precision and higher recall.
361 | '''
362 | citing_author_id, citing_paper_title, cited_paper_title = citing_author_paper_info
363 | if citing_author_id == NO_AUTHOR_FOUND_STR:
364 | return (NO_AUTHOR_FOUND_STR, citing_paper_title, cited_paper_title, NO_AUTHOR_FOUND_STR)
365 |
366 | time.sleep(random.uniform(1, 5)) # Random delay to reduce risk of being blocked.
367 | citing_author = scholarly.search_author_id(citing_author_id)
368 | if 'affiliation' in citing_author:
369 | return (citing_author['name'], citing_paper_title, cited_paper_title, citing_author['affiliation'])
370 | return None
371 |
372 | def __country_aware_comma_split(string_list: List[str]) -> List[str]:
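373 |     # A quick illustration (hypothetical input) of the behavior below:
374 |     #   __country_aware_comma_split(['Yale University, USA, Tsinghua University, China'])
375 |     #   -> ['Yale University, USA', 'Tsinghua University, China']
376 |     # A component keeps the country name that follows it instead of being split off from it.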
373 | comma_split_list = []
374 |
375 | for part in string_list:
376 | # Split the strings by comma.
377 | # NOTE: The non-English comma is entered intentionally.
378 | sub_parts = [sub_part.strip() for sub_part in re.split(r'[,,]', part)]
379 | sub_parts_iter = iter(sub_parts)
380 |
381 | # Merge the split strings if the latter component is a country name.
382 | for sub_part in sub_parts_iter:
383 | if __iscountry(sub_part):
384 | continue # Skip country names if they appear as the first sub_part.
385 | next_part = next(sub_parts_iter, None)
386 | if __iscountry(next_part):
387 | comma_split_list.append(f"{sub_part}, {next_part}")
388 | else:
389 | comma_split_list.append(sub_part)
390 | if next_part:
391 | comma_split_list.append(next_part)
392 | return comma_split_list
393 |
394 | def __iscountry(string: str) -> bool:
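395 |     '''
396 |     Return True if `string` is recognized by pycountry as a country,
397 |     e.g. 'USA' or 'Germany'; 'Yale University' returns False.
398 |     '''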
395 | try:
396 | pycountry.countries.lookup(string)
397 | return True
398 | except LookupError:
399 | return False
400 |
401 | def __print_author_and_affiliation(author_paper_affiliation_tuple_list: List[Tuple[str]]) -> None:
402 | __author_affiliation_tuple_list = []
403 | for author_name, _, _, affiliation_name in sorted(author_paper_affiliation_tuple_list):
404 | if author_name == NO_AUTHOR_FOUND_STR:
405 | continue
406 | __author_affiliation_tuple_list.append((author_name, affiliation_name))
407 |
408 | # Take unique tuples.
409 | __author_affiliation_tuple_list = list(set(__author_affiliation_tuple_list))
410 | for author_name, affiliation_name in sorted(__author_affiliation_tuple_list):
411 | print('Author: %s. Affiliation: %s.' % (author_name, affiliation_name))
412 | print('')
413 | return
414 |
415 |
416 | def save_cache(data: Any, fpath: str) -> None:
417 | os.makedirs(os.path.dirname(fpath), exist_ok=True)
418 | with open(fpath, "wb") as fd:
419 | pickle.dump(data, fd)
420 |
421 | def load_cache(fpath: str) -> Any:
422 | with open(fpath, "rb") as fd:
423 | return pickle.load(fd)
424 |
425 | def generate_citation_map(scholar_id: str,
426 | output_path: str = 'citation_map.html',
427 | csv_output_path: str = 'citation_info.csv',
428 | parse_csv: bool = False,
429 | cache_folder: str = 'cache',
430 | affiliation_conservative: bool = False,
431 | num_processes: int = 16,
432 | use_proxy: bool = False,
433 | pin_colorful: bool = True,
434 | print_citing_affiliations: bool = True):
435 | '''
436 | Google Scholar Citation World Map.
437 |
438 | Parameters
439 | ----
440 | scholar_id: str
441 | Your Google Scholar ID.
442 | output_path: str
443 | (default is 'citation_map.html')
444 | The path to the output HTML file.
445 | csv_output_path: str
446 | (default is 'citation_info.csv')
447 | The path to the output csv file.
448 | parse_csv: bool
449 | (default is False)
450 | If True, will directly jump to Step 5.2, using the information loaded from the csv.
451 | cache_folder: str
452 | (default is 'cache')
453 |         The folder for saving intermediate results, after finding the (author, paper) tuples but before finding the affiliations.
454 |         Caching here lets the user try both the aggressive and the conservative approaches without repeating the scraping.
455 |         Set to None if you do not want caching.
456 |     affiliation_conservative: bool
457 |         (default is False)
458 |         If True, we will use a more conservative approach to identify affiliations.
459 |         If False, we will use a more aggressive approach to identify affiliations.
460 |     num_processes: int
461 |         (default is 16)
462 |         Number of processes for parallel processing.
463 |     use_proxy: bool
464 |         (default is False)
465 |         If True, we will use a scholarly proxy.
466 |         A proxy is necessary in some environments to avoid blocks, but it usually slows things down.
467 |     pin_colorful: bool
468 |         (default is True)
469 |         If True, the location pins will have a variety of colors.
470 |         Otherwise, all pins will share a single color.
471 |     print_citing_affiliations: bool
472 |         (default is True)
473 |         If True, print the list of citing affiliations (affiliations of the citing authors).
474 | '''
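475 |     # Pipeline overview (matching the step comments below): Step 1 find publications; Step 2 find citing
476 |     # authors; Step 3 find and optionally clean affiliations; Step 4 geocode; Step 5 export the csv and draw the map.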
475 |
476 | if not parse_csv:
477 |
478 | if use_proxy:
479 | pg = ProxyGenerator()
480 | pg.FreeProxies()
481 | scholarly.use_proxy(pg)
482 | print('Using proxy.')
483 |
484 | if cache_folder is not None:
485 | cache_path = os.path.join(cache_folder, scholar_id, 'all_citing_author_paper_tuple_list.pkl')
486 | else:
487 | cache_path = None
488 |
489 | if cache_path is None or not os.path.exists(cache_path):
490 | print('No cache found for this author. Finding citing authors from scratch.\n')
491 |
492 | # NOTE: Step 1. Find all publications of the given Google Scholar ID.
493 | # Step 2. Find all citing authors.
494 | all_citing_author_paper_tuple_list = find_all_citing_authors(scholar_id=scholar_id,
495 | num_processes=num_processes)
496 | print('A total of %d citing authors recorded.\n' % len(all_citing_author_paper_tuple_list))
497 | if cache_path is not None and len(all_citing_author_paper_tuple_list) > 0:
498 | save_cache(all_citing_author_paper_tuple_list, cache_path)
499 | print('Saved to cache: %s.\n' % cache_path)
500 |
501 | else:
502 | print('Cache found. Loading author paper information from cache.\n')
503 | all_citing_author_paper_tuple_list = load_cache(cache_path)
504 | print('Loaded from cache: %s.\n' % cache_path)
505 | print('A total of %d citing authors loaded.\n' % len(all_citing_author_paper_tuple_list))
506 |
507 | if cache_folder is not None:
508 | cache_path = os.path.join(cache_folder, scholar_id, 'author_paper_affiliation_tuple_list.pkl')
509 | else:
510 | cache_path = None
511 |
512 | if cache_path is None or not os.path.exists(cache_path):
513 | print('No cache found for this author. Finding citing affiliations from scratch.\n')
514 |
515 |             # NOTE: Step 3. Find all citing affiliations.
516 | print('Identifying affiliations using the %s approach.' % ('conservative' if affiliation_conservative else 'aggressive'))
517 | author_paper_affiliation_tuple_list = find_all_citing_affiliations(all_citing_author_paper_tuple_list,
518 | num_processes=num_processes,
519 | affiliation_conservative=affiliation_conservative)
520 | print('\nA total of %d citing affiliations recorded.\n' % len(author_paper_affiliation_tuple_list))
521 | # Take unique tuples.
522 | author_paper_affiliation_tuple_list = list(set(author_paper_affiliation_tuple_list))
523 |
524 |             # NOTE: Step 3 (continued). Clean the affiliation strings (optional, only used if taking the aggressive approach).
525 | if print_citing_affiliations:
526 | if affiliation_conservative:
527 | print('Taking the conservative approach. Will not need to clean the affiliation names.')
528 | print('List of all citing authors and affiliations:\n')
529 | else:
530 | print('Taking the aggressive approach. Cleaning the affiliation names.')
531 | print('List of all citing authors and affiliations before cleaning:\n')
532 | __print_author_and_affiliation(author_paper_affiliation_tuple_list)
533 | if not affiliation_conservative:
534 | cleaned_author_paper_affiliation_tuple_list = clean_affiliation_names(author_paper_affiliation_tuple_list)
535 | if print_citing_affiliations:
536 | print('List of all citing authors and affiliations after cleaning:\n')
537 | __print_author_and_affiliation(cleaned_author_paper_affiliation_tuple_list)
538 | # Use the merged set to maximize coverage.
539 | author_paper_affiliation_tuple_list += cleaned_author_paper_affiliation_tuple_list
540 | # Take unique tuples.
541 | author_paper_affiliation_tuple_list = list(set(author_paper_affiliation_tuple_list))
542 |
543 | if cache_path is not None and len(author_paper_affiliation_tuple_list) > 0:
544 | save_cache(author_paper_affiliation_tuple_list, cache_path)
545 | print('Saved to cache: %s.\n' % cache_path)
546 |
547 | else:
548 | print('Cache found. Loading author paper and affiliation information from cache.\n')
549 | author_paper_affiliation_tuple_list = load_cache(cache_path)
550 | print('List of all citing authors and affiliations loaded:\n')
551 | __print_author_and_affiliation(author_paper_affiliation_tuple_list)
552 |
553 | # NOTE: Step 4. Convert affiliations in plain text to Geocode.
554 | coordinates_and_info = affiliation_text_to_geocode(author_paper_affiliation_tuple_list)
555 | # Take unique tuples.
556 | coordinates_and_info = sorted(list(set(coordinates_and_info)))
557 |
558 | # NOTE: Step 5.1. Export csv file recording citation information.
559 | export_dict_to_csv(coordinates_and_info, csv_output_path)
560 | print('\nCitation information exported to %s.' % csv_output_path)
561 |
562 | else:
563 | print('\nDirectly parsing the csv. Skipping all previous steps.')
564 | assert os.path.isfile(csv_output_path), '`csv_output_path` is not a file.'
565 | coordinates_and_info = read_csv_to_dict(csv_output_path)
566 | print('\nCitation information loaded from %s.' % csv_output_path)
567 |
568 | # NOTE: Step 5.2. Create the citation world map.
569 | citation_map = create_map(coordinates_and_info, pin_colorful=pin_colorful)
570 | citation_map.save(output_path)
571 | print('\nHTML map created and saved at %s.\n' % output_path)
572 |
573 | num_authors, num_affiliations, num_countries = count_citation_stats(coordinates_and_info)
574 | print('\nYou have been cited by %s researchers from %s affiliations and %s countries.\n' % (
575 | num_authors, num_affiliations, num_countries))
576 | return
577 |
578 |
579 | if __name__ == '__main__':
580 | # Replace this with your Google Scholar ID.
581 | scholar_id = '3rDjnykAAAAJ'
582 | generate_citation_map(scholar_id, output_path='citation_map.html',
583 | csv_output_path='citation_info.csv',
584 | parse_csv=False,
585 | cache_folder='cache', affiliation_conservative=True, num_processes=16,
586 | use_proxy=False, pin_colorful=True, print_citing_affiliations=True)
587 |
--------------------------------------------------------------------------------
/citation_map/scholarly_support.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2024 Chen Liu
2 | # All rights reserved.
3 | import random
4 | import requests
5 | import time
6 | from bs4 import BeautifulSoup
7 | from typing import List, Tuple
8 |
9 | NO_AUTHOR_FOUND_STR = 'No_author_found'
10 |
11 | # Observation: the Nominatim package is very bad at getting the geolocation of companies (geolocations of universities are fine).
12 | # Temporary solution: hard-code the geolocations of the companies.
13 | # NOTE: The headquarters represents the whole company, which usually has many offices across the world.
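# Each value lists the headquarters location (locality fields, then country) followed by its latitude and longitude.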
14 | KNOWN_AFFILIATION_DICT = {
15 | 'amazon': ('King County', 'Seattle', 'Washington', 'USA', 47.622721, -122.337176),
16 | 'meta': ('Menlo Park', 'San Mateo', 'California', 'USA', 37.4851, -122.1483),
17 | 'microsoft': ('King County', 'Redmond', 'Washington', 'USA', 47.645695, -122.131803),
18 | 'ibm': ('Westchester', 'Armonk', 'New York', 'USA', 41.108252, -73.719887),
19 | 'google': ('Santa Clara', 'Mountain View', 'California', 'USA', 37.421473, -122.080679),
20 | 'morgan stanley': ('New York', 'New York', 'New York', 'USA', 40.760251, -73.98518),
21 | 'siemens healthineers': ('Forchheim', 'Forchheim', 'Bavaria', 'Germany', 49.702088, 11.055870),
22 | 'oracle': ('Travis', 'Austin', 'Texas', 'USA', 30.242913, -97.721641)
23 | }
24 |
25 | def get_html_per_citation_page(soup) -> List[Tuple[str, str]]:
26 |     '''
27 |     Utility to parse one page of citation results for
28 |     the cited work, returning (author ID, paper title) tuples.
29 |     Parameters
30 |     --------
31 |     soup: BeautifulSoup object pointing to the current page.
32 |     '''
33 | citing_authors_and_citing_papers = []
34 |
35 | for result in soup.find_all('div', class_='gs_ri'):
36 | title_tag = result.find('h3', class_='gs_rt')
37 | if title_tag:
38 | paper_parsed = False
39 | author_links = result.find_all('a', href=True)
40 | title_text = title_tag.get_text()
41 | title = title_text.replace('[HTML]', '').replace('[PDF]', '')
42 | for link in author_links:
43 | if 'user=' in link['href']:
44 | author_id = link['href'].split('user=')[1].split('&')[0]
45 | citing_authors_and_citing_papers.append((author_id, title))
46 | paper_parsed = True
47 |             if not paper_parsed:
48 |                 print('[WARNING!] Could not find author links for %s.' % title)
49 |                 citing_authors_and_citing_papers.append((NO_AUTHOR_FOUND_STR, title))
50 | else:
51 | continue
52 | return citing_authors_and_citing_papers
53 |
54 |
55 | def get_citing_author_ids_and_citing_papers(paper_url: str) -> List[Tuple[str, str]]:
56 |     '''
57 |     Find the (Google Scholar author ID, citing paper title) pairs for all papers that cite a given paper on Google Scholar.
58 |
59 | Parameters
60 | --------
61 | paper_url: URL of the paper BEING cited.
62 | '''
63 | citing_authors_and_citing_papers = []
64 |
65 | headers = requests.utils.default_headers()
66 | headers.update({
67 | 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
68 | })
69 |
70 | time.sleep(random.uniform(1, 5)) # Random delay to reduce risk of being blocked.
71 |
72 | # Search the url of all citing papers.
73 | response = requests.get(paper_url, headers=headers)
74 | if response.status_code != 200:
75 | raise Exception('Failed to fetch the Google Scholar page')
76 |
77 | # Get the HTML data.
78 | soup = BeautifulSoup(response.text, 'html.parser')
79 |
80 | # Check for common indicators of blocking
81 | if 'CAPTCHA' in soup.text or 'not a robot' in soup.text:
82 | print('[WARNING!] Blocked by CAPTCHA or robot check when searching %s.' % paper_url)
83 | return []
84 |
85 | if 'Access Denied' in soup.text or 'Forbidden' in soup.text:
86 |         print('[WARNING!] Access denied or forbidden when searching %s.' % paper_url)
87 | return []
88 |
89 | # Loop through the citation results and find citing authors and papers.
90 | current_page_number = 1
91 | citing_authors_and_citing_papers += get_html_per_citation_page(soup)
92 |
93 | # Find the page navigation.
94 | navigation_buttons = soup.find_all('a', class_='gs_nma')
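95 |     # Only follow the link labeled with the next page number; advancing current_page_number
96 |     # one step at a time walks the citation results page by page.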
95 | for navigation in navigation_buttons:
96 | page_number_str = navigation.text
97 | if page_number_str and page_number_str.isnumeric() and int(page_number_str) == current_page_number + 1:
98 | # Found the correct button for next page.
99 | current_page_number += 1
100 | next_url = 'https://scholar.google.com' + navigation['href']
101 | time.sleep(random.uniform(1, 5)) # Random delay to reduce risk of being blocked.
102 |
103 | response = requests.get(next_url, headers=headers)
104 | if response.status_code != 200:
105 | break
106 | soup = BeautifulSoup(response.text, 'html.parser')
107 | citing_authors_and_citing_papers += get_html_per_citation_page(soup)
108 | else:
109 | continue
110 |
111 | return citing_authors_and_citing_papers
112 |
113 | def get_organization_name(organization_id: str) -> str:
114 | '''
115 | Get the official name of the organization defined by the unique Google Scholar organization ID.
116 | '''
117 |
118 | headers = requests.utils.default_headers()
119 | headers.update({
120 | 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
121 | })
122 |
123 | url = f'https://scholar.google.com/citations?view_op=view_org&org={organization_id}&hl=en'
124 |
125 | time.sleep(random.uniform(1, 5)) # Random delay to reduce risk of being blocked.
126 |
127 | response = requests.get(url, headers=headers)
128 |
129 | if response.status_code != 200:
130 | raise Exception(f'When getting organization name, failed to fetch {url}: {response.text}.')
131 |
132 | soup = BeautifulSoup(response.text, 'html.parser')
133 | tag = soup.find('h2', {'class': 'gsc_authors_header'})
134 | if not tag:
135 | raise Exception(f'When getting organization name, failed to parse {url}.')
136 | return tag.text.replace('Learn more', '').strip()
137 |
--------------------------------------------------------------------------------
/demo/demo.py:
--------------------------------------------------------------------------------
1 | from citation_map import generate_citation_map
2 |
3 | if __name__ == '__main__':
4 | # This is my Google Scholar ID. Replace this with your ID.
5 | scholar_id = '3rDjnykAAAAJ'
6 | generate_citation_map(scholar_id, output_path='citation_map.html',
7 | cache_folder='cache', parse_csv=False, affiliation_conservative=False, num_processes=16,
8 | use_proxy=False, pin_colorful=True, print_citing_affiliations=True)
9 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | with open('README.md') as f:
4 | long_description = f.read()
5 |
6 | setup(
7 | name='citation-map',
8 | version='4.11',
9 | license='CC BY-NC-SA',
10 | author='Chen Liu',
11 | author_email='chen.liu.cl2482@yale.edu',
12 |     packages=['citation_map'],
13 | # package_dir={'': ''},
14 | description='A simple tool to generate your Google Scholar citation world map.',
15 | long_description=long_description,
16 | long_description_content_type='text/markdown',
17 | url='https://github.com/ChenLiu-1996/CitationMap',
18 | keywords='citation map, citation world map, google scholar',
19 | install_requires=['scholarly', 'geopy', 'folium', 'tqdm', 'requests', 'bs4', 'pycountry', 'pandas'],
20 | classifiers=[
21 |         'Development Status :: 3 - Alpha',  # Choose "3 - Alpha", "4 - Beta" or "5 - Production/Stable" as the current state of your package
22 |         'Intended Audience :: Developers',  # Define the intended audience
23 |         'License :: Other/Proprietary License',  # Pick a license
24 |         'Programming Language :: Python :: 3',  # Specify which Python versions you want to support
25 | ],
26 | )
--------------------------------------------------------------------------------