├── .gitignore
├── generators
│ ├── resources
│ │ ├── SFNT-CFF.otf
│ │ ├── SFNT-TTF.ttf
│ │ ├── SFNT-CFF-Fallback.otf
│ │ ├── SFNT-CFF-Reference.otf
│ │ ├── SFNT-TTF-Composite.ttf
│ │ ├── SFNT-TTF-Fallback.ttf
│ │ ├── SFNT-TTF-Reference.ttf
│ │ ├── test-fonts.css
│ │ ├── notes.txt
│ │ ├── index.css
│ │ ├── SFNT-CFF.ttx
│ │ ├── SFNT-CFF-Fallback.ttx
│ │ ├── SFNT-CFF-Reference.ttx
│ │ └── SFNT-TTF.ttx
│ ├── testCaseGeneratorLib
│ │ ├── __init__.py
│ │ ├── paths.py
│ │ ├── utilities.py
│ │ ├── sfnt.py
│ │ ├── defaultData.py
│ │ ├── woff.py
│ │ └── html.py
│ ├── DecoderTestCaseGenerator.py
│ └── AuthoringToolTestCaseGenerator.py
├── w3c.json
└── README.md
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | .*.sw[nop]
3 | /AuthoringTool/
4 | /Decoder/
5 | /Format/
6 | /UserAgent/
7 |
--------------------------------------------------------------------------------
/generators/resources/SFNT-CFF.otf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/w3c/woff2-tests/HEAD/generators/resources/SFNT-CFF.otf
--------------------------------------------------------------------------------
/generators/resources/SFNT-TTF.ttf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/w3c/woff2-tests/HEAD/generators/resources/SFNT-TTF.ttf
--------------------------------------------------------------------------------
/w3c.json:
--------------------------------------------------------------------------------
1 | {
2 |     "group": ["44556"]
3 |     , "contacts": ["svgeesus"]
4 |     , "repo-type": "tests"
5 | }
6 |
--------------------------------------------------------------------------------
/generators/resources/SFNT-CFF-Fallback.otf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/w3c/woff2-tests/HEAD/generators/resources/SFNT-CFF-Fallback.otf
--------------------------------------------------------------------------------
/generators/resources/SFNT-CFF-Reference.otf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/w3c/woff2-tests/HEAD/generators/resources/SFNT-CFF-Reference.otf
--------------------------------------------------------------------------------
/generators/resources/SFNT-TTF-Composite.ttf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/w3c/woff2-tests/HEAD/generators/resources/SFNT-TTF-Composite.ttf
--------------------------------------------------------------------------------
/generators/resources/SFNT-TTF-Fallback.ttf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/w3c/woff2-tests/HEAD/generators/resources/SFNT-TTF-Fallback.ttf
--------------------------------------------------------------------------------
/generators/resources/SFNT-TTF-Reference.ttf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/w3c/woff2-tests/HEAD/generators/resources/SFNT-TTF-Reference.ttf
--------------------------------------------------------------------------------
/generators/testCaseGeneratorLib/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | This package contains the infrastructure required for generating the various test suites.
3 | """
--------------------------------------------------------------------------------
/generators/resources/test-fonts.css:
--------------------------------------------------------------------------------
1 | @font-face {
2 |     font-family: "WOFF Test CFF Fallback";
3 |     src: url("SFNT-CFF-Fallback.otf") format("opentype");
4 | }
5 | 
6 | @font-face {
7 |     font-family: "WOFF Test CFF Reference";
8 |     src: url("SFNT-CFF-Reference.otf") format("opentype");
9 | }
10 | 
11 | @font-face {
12 |     font-family: "WOFF Test TTF Fallback";
13 |     src: url("SFNT-TTF-Fallback.ttf") format("truetype");
14 | }
15 | 
16 | @font-face {
17 |     font-family: "WOFF Test TTF Reference";
18 |     src: url("SFNT-TTF-Reference.ttf") format("truetype");
19 | }
20 |
--------------------------------------------------------------------------------
/generators/resources/notes.txt:
--------------------------------------------------------------------------------
1 | This directory contains the resources needed for generating the various test cases.
2 |
3 | -----
4 | Fonts
5 | -----
6 |
7 | The fonts are provided in both .ttx and SFNT form. The .ttx files are the source; the SFNT files are compiled from them using TTX. (TTX can be obtained at https://github.com/behdad/fonttools.)
8 |
9 | The fonts have the following glyph/unicode to visual representation mapping:
10 |
11 | SFNT-CFF.otf and SFNT-TTF.ttf
12 |     P    PASS
13 |     F    FAIL
14 | 
15 | SFNT-CFF-Fallback.otf and SFNT-TTF-Fallback.ttf
16 |     P    FAIL
17 |     F    PASS
18 | 
19 | SFNT-CFF-Reference.otf and SFNT-TTF-Reference.ttf
20 |     P    PASS
21 |     F    FAIL
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | woff2-tests
2 | ===========
3 |
4 | Test suite for WOFF 2.0
5 |
6 | The test cases here are generated by the Python scripts in the generators directory.
7 |
8 | * `AuthoringToolTestCaseGenerator.py`
9 | * `DecoderTestCaseGenerator.py`
10 | * `FormatTestCaseGenerator.py`
11 | * `UserAgentTestCaseGenerator.py`
12 |
13 | The scripts depend on the following packages:
14 |
15 | * FontTools https://github.com/behdad/fonttools
16 | * Brotli (Python bindings) https://github.com/google/brotli
17 |
18 | To compile a particular test suite, run the relevant script:
19 | 
20 |     $ python AuthoringToolTestCaseGenerator.py
21 | 
22 |     $ python DecoderTestCaseGenerator.py
23 | 
24 |     $ python FormatTestCaseGenerator.py
25 | 
26 |     $ python UserAgentTestCaseGenerator.py
27 |
28 | # References
29 | 
30 | http://www.w3.org/Fonts/WG/wiki/Main_Page contains the test specifications.
--------------------------------------------------------------------------------
/generators/resources/index.css:
--------------------------------------------------------------------------------
1 | body {
2 |     font-family: "Helvetica Neue", "Helvetica", sans-serif;
3 |     line-height: 175%;
4 |     font-size: 15px;
5 |     padding: 20px;
6 | }
7 | 
8 | p {
9 |     margin: 0px;
10 |     padding: 0px;
11 | }
12 | 
13 | a {
14 |     color: black;
15 |     border-bottom: 1px solid #CCC;
16 |     text-decoration: none;
17 | }
18 | 
19 | a:hover {
20 |     background-color: #FF9;
21 | }
22 | 
23 | /* super headline */
24 | 
25 | h1 {
26 |     font-size: 30px;
27 |     font-weight: bold;
28 |     margin: 0px;
29 |     padding: 0px 0px 20px 0px;
30 | }
31 | 
32 | .mainNote {
33 |     margin: 0px;
34 |     padding: 15px 0px 15px 0px;
35 |     border-top: 1px solid #CCC;
36 | }
37 | 
38 | /* test category */
39 | 
40 | h2.testCategory {
41 |     padding: 5px 0px 5px 10px;
42 |     margin: 40px 0px 14px 0px;
43 |     color: white;
44 |     background-color: black;
45 |     font-size: 15px;
46 |     font-weight: bold;
47 | }
48 | 
49 | .testCategoryNote {
50 |     background-color: #e6e6e6;
51 |     padding: 5px 10px 5px 10px;
52 | }
53 | 
54 | /* test case */
55 | 
56 | .testCase {
57 |     border-bottom: 3px solid black;
58 |     margin: 15px 0px 0px 0px;
59 |     padding: 0px 0px 15px 0px;
60 | }
61 | 
62 | /* test case overview */
63 | 
64 | .testCaseOverview {
65 |     padding: 0px 0px 10px 0px;
66 | }
67 | 
68 | .testCaseOverview h3 {
69 |     font-size: 15px;
70 |     font-weight: bold;
71 |     margin: 0px;
72 |     padding: 0px;
73 | }
74 | 
75 | /* test case details */
76 | 
77 | .testCaseDetails {
78 |     border-top: 1px solid #CCC;
79 |     padding: 10px 0px 0px 0px;
80 | }
81 | 
82 | .testCaseDetails p {
83 |     padding: 2px 0px 2px 0px;
84 | }
85 | 
86 | /* test case pages */
87 | 
88 | .testCasePages {
89 |     float: left;
90 |     width: 50%;
91 | }
92 | 
93 | .testCasePages a {
94 |     color: red;
95 | }
96 | 
97 | /* test case expectations */
98 | 
99 | .testCaseExpectations {
100 |     display: inline-block;
101 |     width: 50%;
102 | }
103 | 
--------------------------------------------------------------------------------
/generators/testCaseGeneratorLib/paths.py:
--------------------------------------------------------------------------------
1 | """
2 | Paths to important directories and files.
3 | """
4 |
5 | import os
6 |
7 | def dirname(path, depth=1):
8 | """
9 | >>> path = "/5/4/3/2/1"
10 | >>> dirname(path)
11 | '/5/4/3/2'
12 | >>> dirname(path, 2)
13 | '/5/4/3'
14 | >>> dirname(path, 3)
15 | '/5/4'
16 | >>> dirname(path, 4)
17 | '/5'
18 | """
19 | for i in range(depth):
20 | path = os.path.dirname(path)
21 | return path
22 |
23 | mainDirectory = dirname(__file__)
24 | mainDirectory = dirname(mainDirectory, 2)
25 |
26 | # directory for SFNT data, test case templates,
27 | resourcesDirectory = os.path.join(mainDirectory, "generators", "resources")
28 | # paths to specific resources
29 | sfntCFFSourcePath = os.path.join(resourcesDirectory, "SFNT-CFF.otf")
30 | sfntTTFSourcePath = os.path.join(resourcesDirectory, "SFNT-TTF.ttf")
31 | sfntTTFCompositeSourcePath = os.path.join(resourcesDirectory, "SFNT-TTF-Composite.ttf")
32 |
33 | # directories for test output
34 | userAgentDirectory = os.path.join(mainDirectory, "UserAgent")
35 | userAgentTestDirectory = os.path.join(userAgentDirectory, "WOFF2")
36 | userAgentTestResourcesDirectory = os.path.join(userAgentTestDirectory, "support")
37 |
38 | formatDirectory = os.path.join(mainDirectory, "Format")
39 | formatTestDirectory = os.path.join(formatDirectory, "Tests", "xhtml1")
40 | formatResourcesDirectory = os.path.join(formatTestDirectory, "resources")
41 |
42 | authoringToolDirectory = os.path.join(mainDirectory, "AuthoringTool")
43 | authoringToolTestDirectory = os.path.join(authoringToolDirectory, "Tests", "xhtml1")
44 | authoringToolResourcesDirectory = os.path.join(authoringToolTestDirectory, "resources")
45 |
46 | decoderDirectory = os.path.join(mainDirectory, "Decoder")
47 | decoderTestDirectory = os.path.join(decoderDirectory, "Tests", "xhtml1")
48 | decoderResourcesDirectory = os.path.join(decoderTestDirectory, "resources")
49 |
50 | if __name__ == "__main__":
51 | import doctest
52 | doctest.testmod()
53 |
--------------------------------------------------------------------------------
/generators/testCaseGeneratorLib/utilities.py:
--------------------------------------------------------------------------------
1 | """
2 | Miscellaneous utilities.
3 | """
4 |
5 | import struct
6 | from fontTools.misc import sstruct
7 | from fontTools.ttLib import getSearchRange
8 | from fontTools.ttLib.sfnt import calcChecksum,\
9 | SFNTDirectoryEntry, sfntDirectoryFormat, sfntDirectorySize, sfntDirectoryEntryFormat, sfntDirectoryEntrySize
10 |
11 | # -------
12 | # Padding
13 | # -------
14 |
15 | def calcPaddingLength(length):
16 | """
17 | Calculate how much padding is needed for 4-byte alignment.
18 | """
19 | if not length % 4:
20 | return 0
21 | return 4 - (length % 4)
22 |
23 | def padData(data):
24 | """
25 | Pad with null bytes.
26 | """
27 | data += b"\0" * calcPaddingLength(len(data))
28 | return data
29 |
30 | # ---------
31 | # Checksums
32 | # ---------
33 |
34 | def sumDataULongs(data):
35 | longs = struct.unpack(">%dL" % (len(data) / 4), data)
36 | value = sum(longs) % (2 ** 32)
37 | return value
38 |
39 | def calcChecksum(data):
40 | data = padData(data)
41 | return sumDataULongs(data)
42 |
43 | def calcTableChecksum(tag, data):
44 | """
45 | Calculate the checksum for the given table.
46 | """
47 | if tag == "head":
48 | checksum = calcChecksum(data[:8] + b"\0\0\0\0" + data[12:])
49 | else:
50 | checksum = calcChecksum(data)
51 | return checksum & 0xffffffff
52 |
53 | def calcHeadCheckSumAdjustment(directory, tableData, flavor=None):
54 | """
55 | Set the checkSumAdjustment in the head table data.
56 | This works with the WOFF table data structure used throughout the suites.
57 | """
58 | # first make sure that the head table is not compressed
59 | for entry in directory:
60 | if entry["tag"] != "head":
61 | continue
62 | origLength = entry["origLength"]
63 | compLength = entry["compLength"]
64 | assert origLength == compLength
65 | break
66 | # repack the data for the SFNT calculator
67 | sfntDirectory = []
68 | offset = sfntDirectorySize + (sfntDirectoryEntrySize * len(directory))
69 | for entry in directory:
70 | d = dict(
71 | tag=entry["tag"],
72 | offset=offset, # this should only be used for calculating the table order
73 | length=entry["origLength"],
74 | checksum=entry["origChecksum"]
75 | )
76 | sfntDirectory.append(d)
77 | offset += entry["origLength"] + calcPaddingLength(entry["origLength"])
78 | sfntTableData = {}
79 | for tag, (origData, compData) in tableData.items():
80 | sfntTableData[tag] = origData
81 | calcHeadCheckSumAdjustmentSFNT(sfntDirectory, sfntTableData, flavor=flavor)
82 | newHeadTableData = sfntTableData["head"]
83 | tableData["head"] = (newHeadTableData, newHeadTableData)
84 |
85 | def calcHeadCheckSumAdjustmentSFNT(directory, tableData, flavor=None):
86 | """
87 | Set the checkSumAdjustment in the head table data.
88 | Grumble.
89 | """
90 | # if the flavor is None, guess.
91 | if flavor is None:
92 | flavor = "\000\001\000\000"
93 | for entry in directory:
94 | if entry["tag"] == "CFF ":
95 | flavor = "OTTO"
96 | break
97 | assert flavor in (b"OTTO", b"\000\001\000\000")
98 | # make the sfnt header
99 | searchRange, entrySelector, rangeShift = getSearchRange(len(directory), 16)
100 | sfntHeaderData = dict(
101 | sfntVersion=flavor,
102 | numTables=len(directory),
103 | searchRange=searchRange,
104 | entrySelector=entrySelector,
105 | rangeShift=rangeShift
106 | )
107 | sfntData = sstruct.pack(sfntDirectoryFormat, sfntHeaderData)
108 | # make a SFNT table directory
109 | directory = [(entry["tag"], entry) for entry in directory]
110 | for tag, entry in sorted(directory):
111 | sfntEntry = SFNTDirectoryEntry()
112 | sfntEntry.tag = entry["tag"]
113 | sfntEntry.checkSum = entry["checksum"]
114 | sfntEntry.offset = entry["offset"]
115 | sfntEntry.length = entry["length"]
116 | sfntData += sfntEntry.toString()
117 | # calculate the checksum
118 | sfntDataChecksum = calcChecksum(sfntData)
119 | # gather all of the checksums
120 | checksums = [entry["checksum"] for o, entry in directory]
121 | checksums.append(sfntDataChecksum)
122 | # calculate the checksum
123 | checkSumAdjustment = sum(checksums)
124 | checkSumAdjustment = (0xB1B0AFBA - checkSumAdjustment) & 0xffffffff
125 | # set the value in the head table
126 | headTableData = tableData["head"]
127 | newHeadTableData = headTableData[:8]
128 | newHeadTableData += struct.pack(">L", checkSumAdjustment)
129 | newHeadTableData += headTableData[12:]
130 | tableData["head"] = newHeadTableData
131 |
132 | # --------
133 | # Metadata
134 | # --------
135 |
136 | def stripMetadata(metadata):
137 | """
138 | Strip leading and trailing whitespace from each line in the metadata.
139 | This is needed because of the indenting in the test case generation functions.
140 | """
141 | metadata = metadata.strip()
142 | metadata = metadata.replace(" ", "\t")
143 | return "\n".join([line.strip() for line in metadata.splitlines()])
144 |
--------------------------------------------------------------------------------
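utilities.py implements the OpenType checksum scheme: table data is padded to a 4-byte boundary, summed as big-endian ULONGs modulo 2**32, and the head table's checkSumAdjustment is chosen so that the whole-file checksum works out to the magic constant 0xB1B0AFBA. The following stdlib-only sketch mirrors that arithmetic; the snake_case names are illustrative, not part of the repository.

```python
import struct

def calc_padding_length(length):
    # bytes needed to pad `length` up to the next 4-byte boundary
    return (4 - length % 4) % 4

def pad_data(data):
    # pad with null bytes to a 4-byte boundary
    return data + b"\0" * calc_padding_length(len(data))

def calc_checksum(data):
    # sum the padded data as big-endian ULONGs, modulo 2**32
    data = pad_data(data)
    longs = struct.unpack(">%dL" % (len(data) // 4), data)
    return sum(longs) % (2 ** 32)

def check_sum_adjustment(file_checksum):
    # value to store in head.checkSumAdjustment so the font sums to 0xB1B0AFBA
    return (0xB1B0AFBA - file_checksum) & 0xFFFFFFFF
```

Note that the head table itself is summed with its checkSumAdjustment field zeroed (see calcTableChecksum above), which is what makes the adjustment well-defined.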
/generators/testCaseGeneratorLib/sfnt.py:
--------------------------------------------------------------------------------
1 | """
2 | SFNT data extractor.
3 | """
4 |
5 | import brotli
6 | import struct
7 | from collections import OrderedDict
8 | from fontTools.misc import sstruct
9 | from fontTools.ttLib import TTFont, getSearchRange
10 | from fontTools.ttLib.sfnt import \
11 | SFNTDirectoryEntry, sfntDirectoryFormat, sfntDirectorySize, sfntDirectoryEntryFormat, sfntDirectoryEntrySize, \
12 | ttcHeaderFormat, ttcHeaderSize
13 | from testCaseGeneratorLib.utilities import padData, calcPaddingLength, calcHeadCheckSumAdjustmentSFNT, calcTableChecksum
14 | from testCaseGeneratorLib.woff import packTestCollectionDirectory, packTestDirectory, packTestCollectionHeader, packTestHeader, transformTable
15 |
16 | def getTTFont(path, **kwargs):
17 | return TTFont(path, recalcTimestamp=False, **kwargs)
18 |
19 | # ---------
20 | # Unpacking
21 | # ---------
22 |
23 | def getSFNTData(pathOrFile, unsortGlyfLoca=False, glyphBBox="", alt255UInt16=False):
24 | if isinstance(pathOrFile, TTFont):
25 | font = pathOrFile
26 | else:
27 | font = getTTFont(pathOrFile)
28 | tableChecksums = {}
29 | tableData = {}
30 | tableOrder = [i for i in sorted(font.keys()) if len(i) == 4]
31 | if unsortGlyfLoca:
32 | assert "loca" in tableOrder
33 | loca = tableOrder.index("loca")
34 | glyf = tableOrder.index("glyf")
35 | tableOrder.insert(glyf, tableOrder.pop(loca))
36 | for tag in tableOrder:
37 | tableChecksums[tag] = font.reader.tables[tag].checkSum
38 | tableData[tag] = transformTable(font, tag, glyphBBox=glyphBBox, alt255UInt16=alt255UInt16)
39 | totalData = b"".join([tableData[tag][1] for tag in tableOrder])
40 | compData = brotli.compress(totalData, brotli.MODE_FONT)
41 | if len(compData) >= len(totalData):
42 | compData = totalData
43 | if not isinstance(pathOrFile, TTFont):
44 | font.close()
45 | del font
46 | return tableData, compData, tableOrder, tableChecksums
47 |
48 | def getSFNTCollectionData(pathOrFiles, modifyNames=True, reverseNames=False, DSIG=False, duplicates=[], shared=[]):
49 | tables = []
50 | offsets = {}
51 | fonts = []
52 |
53 | for pathOrFile in pathOrFiles:
54 | if isinstance(pathOrFile, TTFont):
55 | fonts.append(pathOrFile)
56 | else:
57 | fonts.append(getTTFont(pathOrFile))
58 | numFonts = len(fonts)
59 |
60 | header = dict(
61 | TTCTag="ttcf",
62 | Version=0x00010000,
63 | numFonts=numFonts,
64 | )
65 |
66 | if DSIG:
67 | header["version"] = 0x00020000
68 |
69 | fontData = sstruct.pack(ttcHeaderFormat, header)
70 | offset = ttcHeaderSize + (numFonts * struct.calcsize(">L"))
71 | if DSIG:
72 | offset += 3 * struct.calcsize(">L")
73 |
74 | for font in fonts:
75 | fontData += struct.pack(">L", offset)
76 | tags = [i for i in sorted(font.keys()) if len(i) == 4]
77 | offset += sfntDirectorySize + (len(tags) * sfntDirectoryEntrySize)
78 |
79 | if DSIG:
80 | data = b"\0" * 4
81 | tables.append(data)
82 | offset += len(data)
83 | fontData += struct.pack(">4s", b"DSIG")
84 | fontData += struct.pack(">L", len(data))
85 | fontData += struct.pack(">L", offset)
86 |
87 | for i, font in enumerate(fonts):
88 | # Make the name table unique
89 | if modifyNames:
90 | index = i
91 | if reverseNames:
92 | index = len(fonts) - i - 1
93 | name = font["name"]
94 | for namerecord in name.names:
95 | nameID = namerecord.nameID
96 | string = namerecord.toUnicode()
97 | if nameID == 1:
98 | namerecord.string = "%s %d" % (string, index)
99 | elif nameID == 4:
100 | namerecord.string = string.replace("Regular", "%d Regular" % index)
101 | elif nameID == 6:
102 | namerecord.string = string.replace("-", "%d-" % index)
103 |
104 | tags = [i for i in sorted(font.keys()) if len(i) == 4]
105 |
106 | searchRange, entrySelector, rangeShift = getSearchRange(len(tags), 16)
107 | offsetTable = dict(
108 | sfntVersion=font.sfntVersion,
109 | numTables=len(tags),
110 | searchRange=searchRange,
111 | entrySelector=entrySelector,
112 | rangeShift=rangeShift,
113 | )
114 |
115 | fontData += sstruct.pack(sfntDirectoryFormat, offsetTable)
116 |
117 | for tag in tags:
118 | data = font.getTableData(tag)
119 | checksum = calcTableChecksum(tag, data)
120 | entry = dict(
121 | tag=tag,
122 | offset=offset,
123 | length=len(data),
124 | checkSum=checksum,
125 | )
126 |
127 | if (shared and tag not in shared) or tag in duplicates or data not in tables:
128 | tables.append(data)
129 | offsets[checksum] = offset
130 | offset += len(data) + calcPaddingLength(len(data))
131 | else:
132 | entry["offset"] = offsets[checksum]
133 |
134 | fontData += sstruct.pack(sfntDirectoryEntryFormat, entry)
135 |
136 | for table in tables:
137 | fontData += padData(table)
138 |
139 | return fontData
140 |
141 | def getWOFFCollectionData(pathOrFiles, MismatchGlyfLoca=False, reverseNames=False):
142 | from testCaseGeneratorLib.defaultData import defaultTestData
143 |
144 | tableChecksums = []
145 | tableData = []
146 | tableOrder = []
147 | collectionDirectory = []
148 | locaIndices = []
149 | fonts = []
150 |
151 | for pathOrFile in pathOrFiles:
152 | if isinstance(pathOrFile, TTFont):
153 | fonts.append(pathOrFile)
154 | else:
155 | fonts.append(getTTFont(pathOrFile))
156 |
157 | for i, font in enumerate(fonts):
158 | index = i
159 | if reverseNames:
160 | index = len(fonts) - i - 1
161 |
162 | # Make the name table unique
163 | name = font["name"]
164 | for namerecord in name.names:
165 | nameID = namerecord.nameID
166 | string = namerecord.toUnicode()
167 | if nameID == 1:
168 | namerecord.string = "%s %d" % (string, index)
169 | elif nameID == 4:
170 | namerecord.string = string.replace("Regular", "%d Regular" % index)
171 | elif nameID == 6:
172 | namerecord.string = string.replace("-", "%d-" % index)
173 |
174 | tags = [i for i in sorted(font.keys()) if len(i) == 4]
175 | if "glyf" in tags:
176 | glyf = tags.index("glyf")
177 | loca = tags.index("loca")
178 | tags.insert(glyf + 1, tags.pop(loca))
179 | tableIndices = OrderedDict()
180 | for tag in tags:
181 | data = transformTable(font, tag)
182 | if MismatchGlyfLoca and tag in ("glyf", "loca"):
183 | tableData.append([tag, data])
184 | tableChecksums.append([tag, font.reader.tables[tag].checkSum])
185 | tableOrder.append(tag)
186 | tableIndex = len(tableData) - 1
187 | tableIndices[tag] = tableIndex
188 | if tag == "loca":
189 | locaIndices.append(tableIndex)
190 | else:
191 | if [tag, data] not in tableData:
192 | tableData.append([tag, data])
193 | tableChecksums.append([tag, font.reader.tables[tag].checkSum])
194 | tableOrder.append(tag)
195 | tableIndices[tag] = tableData.index([tag, data])
196 | collectionDirectory.append(dict(numTables=len(tableIndices),
197 | flavor=bytes(font.sfntVersion, "utf-8"),
198 | index=tableIndices))
199 | font.close()
200 | del font
201 |
202 | if MismatchGlyfLoca:
203 | locaIndices.reverse()
204 | for i, entry in enumerate(collectionDirectory):
205 | entry["index"]["loca"] = locaIndices[i]
206 | totalData = b"".join([data[1][1] for data in tableData])
207 | compData = brotli.compress(totalData, brotli.MODE_FONT)
208 | if len(compData) >= len(totalData):
209 | compData = totalData
210 |
211 | directory = [dict(tag=tag, origLength=0, transformLength=0, transformFlag=0) for tag in tableOrder]
212 |
213 | header, directory, collectionHeader, collectionDirectory, tableData = defaultTestData(directory=directory,
214 | tableData=tableData, compressedData=compData, collectionDirectory=collectionDirectory)
215 |
216 | data = packTestHeader(header)
217 | data += packTestDirectory(directory, isCollection=True)
218 | data += packTestCollectionHeader(collectionHeader)
219 | data += packTestCollectionDirectory(collectionDirectory)
220 | data += tableData
221 |
222 | data = padData(data)
223 |
224 | return data
225 |
226 | # -------
227 | # Packing
228 | # -------
229 |
230 | def packSFNT(header, directory, tableData, flavor="cff",
231 | calcCheckSum=True, applyPadding=True, sortDirectory=True,
232 | searchRange=None, entrySelector=None, rangeShift=None):
233 | # update the checkSum
234 | if calcCheckSum:
235 | if flavor == "cff":
236 | f = b"OTTO"
237 | else:
238 | f = b"\000\001\000\000"
239 | calcHeadCheckSumAdjustmentSFNT(directory, tableData, flavor=f)
240 | # update the header
241 | cSearchRange, cEntrySelector, cRangeShift = getSearchRange(len(directory), 16)
242 | if searchRange is None:
243 | searchRange = cSearchRange
244 | if entrySelector is None:
245 | entrySelector = cEntrySelector
246 | if rangeShift is None:
247 | rangeShift = cRangeShift
248 | if flavor == "cff":
249 | header["sfntVersion"] = b"OTTO"
250 | else:
251 | header["sfntVersion"] = b"\000\001\000\000"
252 | header["searchRange"] = searchRange
253 | header["entrySelector"] = entrySelector
254 | header["rangeShift"] = rangeShift
255 | # version and num tables should already be set
256 | sfntData = sstruct.pack(sfntDirectoryFormat, header)
257 | # compile the directory
258 | sfntDirectoryEntries = {}
259 | entryOrder = []
260 | for entry in directory:
261 | sfntEntry = SFNTDirectoryEntry()
262 | sfntEntry.tag = entry["tag"]
263 | sfntEntry.checkSum = entry["checksum"]
264 | sfntEntry.offset = entry["offset"]
265 | sfntEntry.length = entry["length"]
266 | sfntDirectoryEntries[entry["tag"]] = sfntEntry
267 | entryOrder.append(entry["tag"])
268 | if sortDirectory:
269 | entryOrder = sorted(entryOrder)
270 | for tag in entryOrder:
271 | entry = sfntDirectoryEntries[tag]
272 | sfntData += entry.toString()
273 | # compile the data
274 | directory = [(entry["offset"], entry["tag"]) for entry in directory]
275 | for o, tag in sorted(directory):
276 | data = tableData[tag]
277 | if applyPadding:
278 | data = padData(data)
279 | sfntData += data
280 | # done
281 | return sfntData
282 |
--------------------------------------------------------------------------------
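sfnt.py calls fontTools' getSearchRange(numTables, 16) to fill the binary-search fields of the SFNT table directory: searchRange is the largest power of two not exceeding numTables times the entry size, entrySelector is the log2 of that power of two, and rangeShift is the remainder. A stdlib sketch of that computation (get_search_range is a hypothetical stand-in for the fontTools helper, shown only to make the arithmetic explicit):

```python
def get_search_range(num_tables, entry_size=16):
    """Compute (searchRange, entrySelector, rangeShift) for an SFNT directory.

    searchRange   = (largest power of two <= num_tables) * entry_size
    entrySelector = log2 of that power of two
    rangeShift    = num_tables * entry_size - searchRange
    """
    exponent = 0
    while 2 ** (exponent + 1) <= num_tables:
        exponent += 1
    search_range = (2 ** exponent) * entry_size
    return search_range, exponent, num_tables * entry_size - search_range
```

For example, a font with 11 tables gets searchRange 128 (8 * 16), entrySelector 3, and rangeShift 48.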
/generators/resources/SFNT-CFF.ttx:
--------------------------------------------------------------------------------
[The XML markup of this TTX file did not survive extraction; only the text content of a few elements remains. The recoverable content, with original line positions, is the name table strings and the CFF CharString programs below.]

111 | WOFF Test CFF
114 | Regular
117 | 1.000;None;WOFFTestCFF-Regular
120 | WOFF Test CFF
123 | Version 1.000;PS 001.000;hotconv 1.0.57;makeotf.lib2.0.21895 DEVELOPMENT
126 | WOFFTestCFF-Regular
129 | WebFonts Working Group
132 | WOFF Test CFF
135 | Regular
138 | 1.000;None;WOFFTestCFF-Regular
141 | WOFFTestCFF-Regular
144 | Version 1.000;PS 001.000;hotconv 1.0.57;makeotf.lib2.0.21895 DEVELOPMENT
147 | WOFFTestCFF-Regular
150 | WebFonts Working Group
284 | endchar
287 | -102 787 hmoveto
288 | 31 119 rlineto
289 | 203 hlineto
290 | 32 -119 rlineto
291 | 246 hlineto
292 | -213 700 rlineto
293 | -296 hlineto
294 | -213 -700 rlineto
295 | -477 hmoveto
296 | 237 244 245 162 -245 130 287 164 -524 hlineto
297 | 1586 -700 rmoveto
298 | 518 180 -281 520 -237 hlineto
299 | -332 -700 rmoveto
300 | 237 700 -237 hlineto
301 | -434 -196 rmoveto
302 | 60 -228 rlineto
303 | -120 hlineto
304 | endchar
307 | -107 394 306 rmoveto
308 | 153 86 82 115 115 -86 82 -153 hvcurveto
309 | -294 -15 91 -670 -91 -15 297 15 -101 291 hlineto
310 | 632 -291 rmoveto
311 | -15 276 15 -67 vlineto
312 | -254 690 rlineto
313 | -12 hlineto
314 | -219 -593 rlineto
315 | -69 -25 -46 -28 -59 hhcurveto
316 | -5 -15 259 15 -26 hlineto
317 | -67 -35 34 71 26 hvcurveto
318 | 43 116 rlineto
319 | 232 hlineto
320 | 78 -221 rlineto
321 | 599 695 rmoveto
322 | -28 -11 -5 -6 -14 -22 -44 34 -78 hhcurveto
323 | -124 -91 -73 -106 -95 52 -64 147 -61 hvcurveto
324 | 126 -52 30 -44 -79 vvcurveto
325 | -81 -55 -50 -81 -103 -85 70 168 -17 vhcurveto
326 | -15 -253 15 hlineto
327 | 22 4 13 15 18 45 32 -37 98 hhcurveto
328 | 130 92 82 116 83 -32 66 -173 73 hvcurveto
329 | -120 50 -33 43 72 vvcurveto
330 | 71 51 49 82 90 65 -92 -119 13 vhcurveto
331 | 15 226 hlineto
332 | -1118 -207 rmoveto
333 | -119 -49 -63 -96 vhcurveto
334 | -82 364 82 hlineto
335 | 96 49 -63 -119 hvcurveto
336 | 312 53 rmoveto
337 | 108 -305 rlineto
338 | -221 hlineto
339 | 1420 459 rmoveto
340 | -28 -11 -5 -6 -14 -22 -44 34 -78 hhcurveto
341 | -124 -91 -73 -106 -95 52 -64 147 -61 hvcurveto
342 | 126 -52 30 -44 -79 vvcurveto
343 | -81 -55 -50 -81 -103 -85 70 168 -17 vhcurveto
344 | -15 -253 15 hlineto
345 | 22 4 13 15 18 45 32 -37 98 hhcurveto
346 | 130 92 82 116 83 -32 66 -173 73 hvcurveto
347 | -120 50 -33 43 72 vvcurveto
348 | 71 51 49 82 90 65 -92 -119 13 vhcurveto
349 | 15 226 hlineto
353 | endchar
/generators/testCaseGeneratorLib/defaultData.py:
--------------------------------------------------------------------------------
1 | """
2 | Default data for the test cases.
3 | """
4 |
5 | import brotli
6 | from copy import deepcopy
7 | from fontTools.ttLib.sfnt import sfntDirectoryFormat, sfntDirectorySize, sfntDirectoryEntryFormat, sfntDirectoryEntrySize
8 | from testCaseGeneratorLib.sfnt import getSFNTData
9 | from testCaseGeneratorLib.woff import packTestDirectory, packTestCollectionHeader, packTestCollectionDirectory, woffHeaderSize, knownTableTags
10 | from testCaseGeneratorLib.paths import sfntCFFSourcePath, sfntTTFSourcePath
11 | from testCaseGeneratorLib.utilities import calcPaddingLength, calcTableChecksum
12 |
13 | # ---------
14 | # SFNT Data
15 | # ---------
16 |
17 | originalSFNTChecksums = {}
18 |
19 | sfntTTFTableData, sfntTTFCompressedData, sfntTTFTableOrder, sfntTTFTableChecksums = getSFNTData(sfntTTFSourcePath)
20 | for tag, checksum in sfntTTFTableChecksums.items():
21 |     data = sfntTTFTableData[tag]
22 |     if data in originalSFNTChecksums:
23 |         assert originalSFNTChecksums[data] == checksum
24 |     originalSFNTChecksums[data] = checksum
25 | 
26 | sfntCFFTableData, sfntCFFCompressedData, sfntCFFTableOrder, sfntCFFTableChecksums = getSFNTData(sfntCFFSourcePath)
27 | for tag, checksum in sfntCFFTableChecksums.items():
28 |     data = sfntCFFTableData[tag]
29 |     if data in originalSFNTChecksums:
30 |         assert originalSFNTChecksums[data] == checksum
31 |     originalSFNTChecksums[data] = checksum
32 |
33 | # --------
34 | # Metadata
35 | # --------
36 |
37 | testDataWOFFMetadata = """
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 | Description without language.
49 |
50 |
51 | Description with "en" language.
52 |
53 |
54 | Description with "fr" language.
55 |
56 |
57 |
58 |
59 | License without language.
60 |
61 |
62 | License with "en" language.
63 |
64 |
65 | License with "fr" language.
66 |
67 |
68 |
69 |
70 | Copyright without language.
71 |
72 |
73 | Copyright with "en" language.
74 |
75 |
76 | Copyright with "fr" language.
77 |
78 |
79 |
80 |
81 | Trademark without language.
82 |
83 |
84 | Trademark with "en" language.
85 |
86 |
87 | Trademark with "fr" language.
88 |
89 |
90 |
91 |
92 | Extension 1 - Name Without Language
93 | Extension 1 - Name With "en" Language
94 | Extension 1 - Name With "fr" Language
95 | -
96 |
Extension 1 - Item 1 - Name Without Language
97 | Extension 1 - Item 1 - Name With "en" Language
98 | Extension 1 - Item 1 - Name With "fr" Language
99 | Extension 1 - Item 1 - Value Without Language
100 | Extension 1 - Item 1 - Value With "en" Language
101 | Extension 1 - Item 1 - Value With "fr" Language
102 |
103 | -
104 | Extension 1 - Item 2 - Name Without Language
105 | Extension 1 - Item 2 - Name With "en" Language
106 | Extension 1 - Item 2 - Name With "fr" Language
107 | Extension 1 - Item 2 - Value Without Language
108 | Extension 1 - Item 2 - Value With "en" Language
109 | Extension 1 - Item 2 - Value With "fr" Language
110 |
111 |
112 |
113 | Extension 2 - Name Without Language
114 | Extension 2 - Name With "en" Language
115 | Extension 2 - Name With "fr" Language
116 | -
117 | Extension 2 - Item 1 - Name Without Language
118 | Extension 2 - Item 1 - Name With "en" Language
119 | Extension 2 - Item 1 - Name With "fr" Language
120 | Extension 2 - Item 1 - Value Without Language
121 | Extension 2 - Item 1 - Value With "en" Language
122 | Extension 2 - Item 1 - Value With "fr" Language
123 |
124 | -
125 | Extension 2 - Item 2 - Name Without Language
126 | Extension 2 - Item 2 - Name With "en" Language
127 | Extension 2 - Item 2 - Name With "fr" Language
128 | Extension 2 - Item 2 - Value Without Language
129 | Extension 2 - Item 2 - Value With "en" Language
130 | Extension 2 - Item 2 - Value With "fr" Language
131 |
132 | -
133 | Extension 2 - Item 3 - Name Without Language
134 | Extension 2 - Item 3 - Name With "en" Language
135 | Extension 2 - Item 3 - Name With "fr" Language
136 | Extension 2 - Item 3 - Value Without Language
137 | Extension 2 - Item 3 - Value With "en" Language
138 |
139 |
140 |
141 | """.strip()
142 |
143 | # ------------
144 | # Private Data
145 | # ------------
146 |
147 | testDataWOFFPrivateData = b"\0" * 100
148 |
149 | # -----------------------
150 | # Default Data Structures
151 | # -----------------------
152 |
153 | # WOFF
154 |
155 | testDataWOFFHeader = dict(
156 |     signature=b"wOF2",
157 |     flavor=b"OTTO",
158 | length=0,
159 | reserved=0,
160 | numTables=0,
161 | totalSfntSize=0,
162 | totalCompressedSize=0,
163 | majorVersion=0,
164 | minorVersion=0,
165 | metaOffset=0,
166 | metaLength=0,
167 | metaOrigLength=0,
168 | privOffset=0,
169 | privLength=0
170 | )
171 |
172 | testTTFDataWOFFDirectory = []
173 | for tag in sfntTTFTableOrder:
174 | d = dict(
175 | tag=tag,
176 | origLength=0,
177 | transformLength=0,
178 | transformFlag=0,
179 | )
180 | testTTFDataWOFFDirectory.append(d)
181 |
182 | testCFFDataWOFFDirectory = []
183 | for tag in sfntCFFTableOrder:
184 | d = dict(
185 | tag=tag,
186 | origLength=0,
187 | transformLength=0,
188 | transformFlag=0,
189 | )
190 | testCFFDataWOFFDirectory.append(d)
191 |
192 | # SFNT
193 |
194 | testDataSFNTHeader = dict(
195 | sfntVersion=b"OTTO",
196 | numTables=0,
197 | searchRange=0,
198 | entrySelector=0,
199 | rangeShift=0
200 | )
201 |
202 | testTTFDataSFNTDirectory = []
203 | for tag in sfntTTFTableOrder:
204 | d = dict(
205 | tag=tag,
206 | offset=0,
207 | length=0,
208 | checksum=0
209 | )
210 | testTTFDataSFNTDirectory.append(d)
211 |
212 | testCFFDataSFNTDirectory = []
213 | for tag in sfntCFFTableOrder:
214 | d = dict(
215 | tag=tag,
216 | offset=0,
217 | length=0,
218 | checksum=0
219 | )
220 | testCFFDataSFNTDirectory.append(d)
221 |
222 | # --------------------
223 | # Default Data Creator
224 | # --------------------
225 |
226 | def defaultTestData(header=None, directory=None, collectionHeader=None, collectionDirectory=None, tableData=None, compressedData=None, metadata=None, privateData=None, flavor="cff", Base128Bug=False, knownTags=knownTableTags, skipTransformLength=False):
227 | isCollection = collectionDirectory is not None
228 | parts = []
229 | # setup the header
230 | if header is None:
231 | header = deepcopy(testDataWOFFHeader)
232 | parts.append(header)
233 | # setup the directory
234 | if directory is None:
235 | if flavor == "cff":
236 | directory = deepcopy(testCFFDataWOFFDirectory)
237 | else:
238 | directory = deepcopy(testTTFDataWOFFDirectory)
239 | parts.append(directory)
240 | if isCollection:
241 | if collectionHeader is None:
242 | collectionHeader = dict(version=0x00010000, numFonts=len(collectionDirectory))
243 | parts.append(collectionHeader)
244 | parts.append(collectionDirectory)
245 | # setup the table data
246 | if tableData is None:
247 | if flavor == "cff":
248 | tableData = deepcopy(sfntCFFTableData)
249 | else:
250 | tableData = deepcopy(sfntTTFTableData)
251 | if compressedData is None:
252 | if flavor == "cff":
253 | compressedData = deepcopy(sfntCFFCompressedData)
254 | else:
255 | compressedData = deepcopy(sfntTTFCompressedData)
256 | parts.append(compressedData)
257 | # sanity checks
258 | assert len(directory) == len(tableData)
259 | if not isCollection:
260 | assert set(tableData.keys()) == set([entry["tag"] for entry in directory])
261 | # apply the directory data to the header
262 | header["numTables"] = len(directory)
263 | if isCollection:
264 | header["flavor"] = b"ttcf"
265 | elif "CFF " in tableData:
266 | header["flavor"] = b"OTTO"
267 | else:
268 | header["flavor"] = b"\000\001\000\000"
269 | # apply the table data to the directory and the header
270 | if isCollection:
271 | # TTC header
272 | header["totalSfntSize"] = 12 + 4 * collectionHeader["numFonts"]
273 | header["totalSfntSize"] += sfntDirectorySize * collectionHeader["numFonts"]
274 | for entry in collectionDirectory:
275 | header["totalSfntSize"] += sfntDirectoryEntrySize * entry["numTables"]
276 | else:
277 | header["totalSfntSize"] = sfntDirectorySize + (len(directory) * sfntDirectoryEntrySize)
278 | header["totalCompressedSize"] = len(compressedData)
279 | for i, entry in enumerate(directory):
280 | tag = entry["tag"]
281 | if isCollection:
282 | origData, transformData = tableData[i][1]
283 | else:
284 | origData, transformData = tableData[tag]
285 | entry["origLength"] = len(origData)
286 | entry["transformLength"] = len(transformData)
287 | if tag == "hmtx" and entry["origLength"] > entry["transformLength"]:
288 | entry["transformFlag"] = 1
289 | header["totalSfntSize"] += entry["origLength"]
290 | header["totalSfntSize"] += calcPaddingLength(header["totalSfntSize"])
291 | header["length"] = woffHeaderSize + len(packTestDirectory(directory, knownTags=knownTags, skipTransformLength=skipTransformLength, Base128Bug=Base128Bug))
292 | if isCollection:
293 | header["length"] += len(packTestCollectionHeader(collectionHeader))
294 | header["length"] += len(packTestCollectionDirectory(collectionDirectory))
295 | header["length"] += len(compressedData)
296 | header["length"] += calcPaddingLength(header["length"])
297 | # setup the metadata
298 | if metadata is not None:
299 | if isinstance(metadata, tuple):
300 | metadata, compMetadata = metadata
301 | else:
302 | compMetadata = None
303 | if compMetadata is None:
304 | compMetadata = brotli.compress(bytes(metadata, "utf-8"), brotli.MODE_TEXT)
305 | header["metaOffset"] = header["length"]
306 | header["metaLength"] = len(compMetadata)
307 | header["metaOrigLength"] = len(metadata)
308 | header["length"] += len(compMetadata)
309 | if privateData is not None:
310 | header["length"] += calcPaddingLength(len(compMetadata))
311 | parts.append((metadata, compMetadata))
312 | # setup the private data
313 | if privateData is not None:
314 | header["privOffset"] = header["length"]
315 | header["privLength"] = len(privateData)
316 | header["length"] += len(privateData)
317 | parts.append(privateData)
318 | # return the parts
319 | return parts
320 |
321 | # -------------------------
322 | # Default SFNT Data Creator
323 | # -------------------------
324 |
325 | def defaultSFNTTestData(tableData=None, flavor="cff"):
326 | parts = []
327 | # setup the header
328 | header = deepcopy(testDataSFNTHeader)
329 | parts.append(header)
330 | # setup the directory
331 | if flavor == "cff":
332 | directory = deepcopy(testCFFDataSFNTDirectory)
333 | else:
334 | directory = deepcopy(testTTFDataSFNTDirectory)
335 | parts.append(directory)
336 | # setup the table data
337 | if tableData is None:
338 | if flavor == "cff":
339 | tableData = deepcopy(sfntCFFTableData)
340 | else:
341 | tableData = deepcopy(sfntTTFTableData)
342 | for tag, (data, transformData) in tableData.items():
343 | tableData[tag] = data
344 | parts.append(tableData)
345 | # sanity checks
346 | assert len(directory) == len(tableData)
347 | assert set(tableData.keys()) == set([entry["tag"] for entry in directory])
348 | # apply the directory data to the header
349 | header["numTables"] = len(directory)
350 | if flavor == "cff":
351 | header["flavor"] = b"OTTO"
352 | else:
353 | header["flavor"] = b"\000\001\000\000"
354 | # apply the table data to the directory and the header
355 | offset = sfntDirectorySize + (len(directory) * sfntDirectoryEntrySize)
356 | for entry in directory:
357 | tag = entry["tag"]
358 | data = tableData[tag]
359 | length = len(data)
360 | # measure
361 | paddedLength = length + calcPaddingLength(length)
362 | # store
363 | entry["offset"] = offset
364 | entry["length"] = length
365 | if data in originalSFNTChecksums:
366 | checksum = originalSFNTChecksums[data]
367 | else:
368 | checksum = calcTableChecksum(tag, data)
369 | entry["checksum"] = checksum
370 | # next
371 | offset += paddedLength
372 | # return the parts
373 | return parts
374 |
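The offset arithmetic in `defaultSFNTTestData` relies on `calcPaddingLength` from the utilities module to keep every table on a 4-byte boundary. A minimal standalone sketch of that alignment logic (the helper bodies below are assumptions written for illustration, not the actual `testCaseGeneratorLib.utilities` code):

```python
def calc_padding_length(length):
    """Zero bytes needed to pad `length` up to a 4-byte boundary."""
    return (4 - length % 4) % 4

def pad_data(data):
    """Append zero bytes so the blob ends on a 4-byte boundary."""
    return data + b"\0" * calc_padding_length(len(data))

# Offsets advance by the padded length, mirroring the directory loop:
# header (12 bytes) + 3 directory entries (16 bytes each), then 3 tables.
offset = 12 + 3 * 16
for length in (10, 16, 1):
    offset += length + calc_padding_length(length)
```

Padding each table (rather than only the last one) matches how SFNT table checksums are defined over 4-byte-aligned data.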
--------------------------------------------------------------------------------
/generators/resources/SFNT-CFF-Fallback.ttx:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 |
68 |
69 |
70 |
71 |
72 |
73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 |
81 |
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 |
105 |
106 |
107 |
108 |
109 |
110 |
111 | WOFF Test CFF Fallback
112 |
113 |
114 | Regular
115 |
116 |
117 | 1.000;None;WOFFTestCFFFallback-Regular
118 |
119 |
120 | WOFF Test CFF Fallback
121 |
122 |
123 | Version 1.000;PS 001.000;hotconv 1.0.57;makeotf.lib2.0.21895 DEVELOPMENT
124 |
125 |
126 | WOFFTestCFFFallback-Regular
127 |
128 |
129 | WebFonts Working Group
130 |
131 |
132 | WOFF Test CFF Fallback
133 |
134 |
135 | Regular
136 |
137 |
138 | 1.000;None;WOFFTestCFFFallback-Regular
139 |
140 |
141 | WOFFTestCFFFallback-Regular
142 |
143 |
144 | Version 1.000;PS 001.000;hotconv 1.0.57;makeotf.lib2.0.21895 DEVELOPMENT
145 |
146 |
147 | WOFFTestCFFFallback-Regular
148 |
149 |
150 | WebFonts Working Group
151 |
152 |
153 |
154 |
155 |
156 |
157 |
158 |
159 |
160 |
161 |
162 |
163 |
164 |
165 |
166 |
167 |
168 |
169 |
170 |
171 |
172 |
173 |
174 |
175 |
176 |
177 |
178 |
179 |
180 |
181 |
182 |
183 |
184 |
185 |
186 |
187 |
188 |
189 |
190 |
191 |
192 |
193 |
194 |
195 |
196 |
197 |
198 |
199 |
200 |
201 |
202 |
203 |
204 |
205 |
206 |
207 |
208 |
209 |
210 |
211 |
212 |
213 |
214 |
215 |
216 |
217 |
218 |
219 |
220 |
221 |
222 |
223 |
224 |
225 |
226 |
227 |
228 |
229 |
230 |
231 |
232 |
233 |
234 |
235 |
236 |
237 |
238 |
239 |
240 |
241 |
242 |
243 |
244 |
245 |
246 |
247 |
248 |
249 |
250 |
251 |
252 |
253 |
254 |
255 |
256 |
257 |
258 |
259 |
260 |
261 |
262 |
263 |
264 |
265 |
266 |
267 |
268 |
269 |
270 |
271 |
272 |
273 |
274 |
275 |
276 |
277 |
278 |
279 |
280 |
281 |
282 |
283 |
284 | endchar
285 |
286 |
287 | -107 394 306 rmoveto
288 | 153 86 82 115 115 -86 82 -153 hvcurveto
289 | -294 -15 91 -670 -91 -15 297 15 -101 291 hlineto
290 | 632 -291 rmoveto
291 | -15 276 15 -67 vlineto
292 | -254 690 rlineto
293 | -12 hlineto
294 | -219 -593 rlineto
295 | -69 -25 -46 -28 -59 hhcurveto
296 | -5 -15 259 15 -26 hlineto
297 | -67 -35 34 71 26 hvcurveto
298 | 43 116 rlineto
299 | 232 hlineto
300 | 78 -221 rlineto
301 | 599 695 rmoveto
302 | -28 -11 -5 -6 -14 -22 -44 34 -78 hhcurveto
303 | -124 -91 -73 -106 -95 52 -64 147 -61 hvcurveto
304 | 126 -52 30 -44 -79 vvcurveto
305 | -81 -55 -50 -81 -103 -85 70 168 -17 vhcurveto
306 | -15 -253 15 hlineto
307 | 22 4 13 15 18 45 32 -37 98 hhcurveto
308 | 130 92 82 116 83 -32 66 -173 73 hvcurveto
309 | -120 50 -33 43 72 vvcurveto
310 | 71 51 49 82 90 65 -92 -119 13 vhcurveto
311 | 15 226 hlineto
312 | -1118 -207 rmoveto
313 | -119 -49 -63 -96 vhcurveto
314 | -82 364 82 hlineto
315 | 96 49 -63 -119 hvcurveto
316 | 312 53 rmoveto
317 | 108 -305 rlineto
318 | -221 hlineto
319 | 1420 459 rmoveto
320 | -28 -11 -5 -6 -14 -22 -44 34 -78 hhcurveto
321 | -124 -91 -73 -106 -95 52 -64 147 -61 hvcurveto
322 | 126 -52 30 -44 -79 vvcurveto
323 | -81 -55 -50 -81 -103 -85 70 168 -17 vhcurveto
324 | -15 -253 15 hlineto
325 | 22 4 13 15 18 45 32 -37 98 hhcurveto
326 | 130 92 82 116 83 -32 66 -173 73 hvcurveto
327 | -120 50 -33 43 72 vvcurveto
328 | 71 51 49 82 90 65 -92 -119 13 vhcurveto
329 | 15 226 hlineto
330 | endchar
331 |
332 |
333 | -102 787 hmoveto
334 | 31 119 rlineto
335 | 203 hlineto
336 | 32 -119 rlineto
337 | 246 hlineto
338 | -213 700 rlineto
339 | -296 hlineto
340 | -213 -700 rlineto
341 | -477 hmoveto
342 | 237 244 245 162 -245 130 287 164 -524 hlineto
343 | 1586 -700 rmoveto
344 | 518 180 -281 520 -237 hlineto
345 | -332 -700 rmoveto
346 | 237 700 -237 hlineto
347 | -434 -196 rmoveto
348 | 60 -228 rlineto
349 | -120 hlineto
350 | endchar
351 |
352 |
353 | endchar
354 |
355 |
356 |
357 |
358 |
359 |
360 |
361 |
362 |
363 |
364 |
365 |
366 |
367 |
368 |
369 |
370 |
371 |
--------------------------------------------------------------------------------
/generators/resources/SFNT-CFF-Reference.ttx:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 |
19 |
20 |
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 |
44 |
45 |
46 |
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 |
68 |
69 |
70 |
71 |
72 |
73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 |
81 |
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 |
105 |
106 |
107 |
108 |
109 |
110 |
111 | WOFF Test CFF Reference
112 |
113 |
114 | Regular
115 |
116 |
117 | 1.000;None;WOFFTestCFFReference-Regular
118 |
119 |
120 | WOFF Test CFF Reference
121 |
122 |
123 | Version 1.000;PS 001.000;hotconv 1.0.57;makeotf.lib2.0.21895 DEVELOPMENT
124 |
125 |
126 | WOFFTestCFFReference-Regular
127 |
128 |
129 | WebFonts Working Group
130 |
131 |
132 | WOFF Test CFF Reference
133 |
134 |
135 | Regular
136 |
137 |
138 | 1.000;None;WOFFTestCFFReference-Regular
139 |
140 |
141 | WOFFTestCFFReference-Regular
142 |
143 |
144 | Version 1.000;PS 001.000;hotconv 1.0.57;makeotf.lib2.0.21895 DEVELOPMENT
145 |
146 |
147 | WOFFTestCFFReference-Regular
148 |
149 |
150 | WebFonts Working Group
151 |
152 |
153 |
154 |
155 |
156 |
157 |
158 |
159 |
160 |
161 |
162 |
163 |
164 |
165 |
166 |
167 |
168 |
169 |
170 |
171 |
172 |
173 |
174 |
175 |
176 |
177 |
178 |
179 |
180 |
181 |
182 |
183 |
184 |
185 |
186 |
187 |
188 |
189 |
190 |
191 |
192 |
193 |
194 |
195 |
196 |
197 |
198 |
199 |
200 |
201 |
202 |
203 |
204 |
205 |
206 |
207 |
208 |
209 |
210 |
211 |
212 |
213 |
214 |
215 |
216 |
217 |
218 |
219 |
220 |
221 |
222 |
223 |
224 |
225 |
226 |
227 |
228 |
229 |
230 |
231 |
232 |
233 |
234 |
235 |
236 |
237 |
238 |
239 |
240 |
241 |
242 |
243 |
244 |
245 |
246 |
247 |
248 |
249 |
250 |
251 |
252 |
253 |
254 |
255 |
256 |
257 |
258 |
259 |
260 |
261 |
262 |
263 |
264 |
265 |
266 |
267 |
268 |
269 |
270 |
271 |
272 |
273 |
274 |
275 |
276 |
277 |
278 |
279 |
280 |
281 |
282 |
283 |
284 | endchar
285 |
286 |
287 | -102 787 hmoveto
288 | 31 119 rlineto
289 | 203 hlineto
290 | 32 -119 rlineto
291 | 246 hlineto
292 | -213 700 rlineto
293 | -296 hlineto
294 | -213 -700 rlineto
295 | -477 hmoveto
296 | 237 244 245 162 -245 130 287 164 -524 hlineto
297 | 1586 -700 rmoveto
298 | 518 180 -281 520 -237 hlineto
299 | -332 -700 rmoveto
300 | 237 700 -237 hlineto
301 | -434 -196 rmoveto
302 | 60 -228 rlineto
303 | -120 hlineto
304 | endchar
305 |
306 |
307 | -107 394 306 rmoveto
308 | 153 86 82 115 115 -86 82 -153 hvcurveto
309 | -294 -15 91 -670 -91 -15 297 15 -101 291 hlineto
310 | 632 -291 rmoveto
311 | -15 276 15 -67 vlineto
312 | -254 690 rlineto
313 | -12 hlineto
314 | -219 -593 rlineto
315 | -69 -25 -46 -28 -59 hhcurveto
316 | -5 -15 259 15 -26 hlineto
317 | -67 -35 34 71 26 hvcurveto
318 | 43 116 rlineto
319 | 232 hlineto
320 | 78 -221 rlineto
321 | 599 695 rmoveto
322 | -28 -11 -5 -6 -14 -22 -44 34 -78 hhcurveto
323 | -124 -91 -73 -106 -95 52 -64 147 -61 hvcurveto
324 | 126 -52 30 -44 -79 vvcurveto
325 | -81 -55 -50 -81 -103 -85 70 168 -17 vhcurveto
326 | -15 -253 15 hlineto
327 | 22 4 13 15 18 45 32 -37 98 hhcurveto
328 | 130 92 82 116 83 -32 66 -173 73 hvcurveto
329 | -120 50 -33 43 72 vvcurveto
330 | 71 51 49 82 90 65 -92 -119 13 vhcurveto
331 | 15 226 hlineto
332 | -1118 -207 rmoveto
333 | -119 -49 -63 -96 vhcurveto
334 | -82 364 82 hlineto
335 | 96 49 -63 -119 hvcurveto
336 | 312 53 rmoveto
337 | 108 -305 rlineto
338 | -221 hlineto
339 | 1420 459 rmoveto
340 | -28 -11 -5 -6 -14 -22 -44 34 -78 hhcurveto
341 | -124 -91 -73 -106 -95 52 -64 147 -61 hvcurveto
342 | 126 -52 30 -44 -79 vvcurveto
343 | -81 -55 -50 -81 -103 -85 70 168 -17 vhcurveto
344 | -15 -253 15 hlineto
345 | 22 4 13 15 18 45 32 -37 98 hhcurveto
346 | 130 92 82 116 83 -32 66 -173 73 hvcurveto
347 | -120 50 -33 43 72 vvcurveto
348 | 71 51 49 82 90 65 -92 -119 13 vhcurveto
349 | 15 226 hlineto
350 | endchar
351 |
352 |
353 | endchar
354 |
355 |
356 |
357 |
358 |
359 |
360 |
361 |
362 |
363 |
364 |
365 |
366 |
367 |
368 |
369 |
370 |
371 |
--------------------------------------------------------------------------------
/generators/testCaseGeneratorLib/woff.py:
--------------------------------------------------------------------------------
1 | """
2 | WOFF data packers.
3 | """
4 |
5 | import struct
6 | from copy import deepcopy
7 | from fontTools.misc import sstruct
8 | from fontTools.misc.arrayTools import calcIntBounds
9 | from testCaseGeneratorLib.utilities import padData, calcHeadCheckSumAdjustment
10 |
11 | # ------------------
12 | # struct Description
13 | # ------------------
14 |
15 | woffHeaderFormat = """
16 | > # big endian
17 | signature: 4s
18 | flavor: 4s
19 | length: L
20 | numTables: H
21 | reserved: H
22 | totalSfntSize: L
23 | totalCompressedSize: L
24 | majorVersion: H
25 | minorVersion: H
26 | metaOffset: L
27 | metaLength: L
28 | metaOrigLength: L
29 | privOffset: L
30 | privLength: L
31 | """
32 | woffHeaderSize = sstruct.calcsize(woffHeaderFormat)
33 |
34 | woffTransformedGlyfHeaderFormat = """
35 | > # big endian
36 | version: L
37 | numGlyphs: H
38 | indexFormat: H
39 | nContourStreamSize: L
40 | nPointsStreamSize: L
41 | flagStreamSize: L
42 | glyphStreamSize: L
43 | compositeStreamSize: L
44 | bboxStreamSize: L
45 | instructionStreamSize: L
46 | """
47 |
48 | woffTransformedGlyfHeader = dict(
49 | version=0,
50 | numGlyphs=0,
51 | indexFormat=0,
52 | nContourStreamSize=0,
53 | nPointsStreamSize=0,
54 | flagStreamSize=0,
55 | glyphStreamSize=0,
56 | compositeStreamSize=0,
57 | bboxStreamSize=0,
58 | instructionStreamSize=0,
59 | )
60 |
61 | # ------------
62 | # Data Packing
63 | # ------------
64 |
65 | knownTableTags = (
66 | "cmap", "head", "hhea", "hmtx", "maxp", "name", "OS/2", "post", "cvt ",
67 | "fpgm", "glyf", "loca", "prep", "CFF ", "VORG", "EBDT", "EBLC", "gasp",
68 | "hdmx", "kern", "LTSH", "PCLT", "VDMX", "vhea", "vmtx", "BASE", "GDEF",
69 | "GPOS", "GSUB", "EBSC", "JSTF", "MATH", "CBDT", "CBLC", "COLR", "CPAL",
70 | "SVG ", "sbix", "acnt", "avar", "bdat", "bloc", "bsln", "cvar", "fdsc",
71 | "feat", "fmtx", "fvar", "gvar", "hsty", "just", "lcar", "mort", "morx",
72 | "opbd", "prop", "trak", "Zapf", "Silf", "Glat", "Gloc", "Feat", "Sill",
73 | )
74 |
75 | unknownTableTagFlag = 63
76 |
77 | transformedTables = ("glyf", "loca")
78 |
79 | def transformTable(font, tag, glyphBBox="", alt255UInt16=False):
80 | if tag == "head":
81 | font["head"].flags |= 1 << 11
82 |
83 | origData = font.getTableData(tag)
84 | transformedData = origData
85 |
86 | transform = False
87 | if (tag == "hmtx" and "glyf" in font) or (tag in transformedTables):
88 | transform = True
89 |
90 | if transform:
91 | if tag == "glyf":
92 | transformedData = transformGlyf(font, glyphBBox=glyphBBox, alt255UInt16=alt255UInt16)
93 | elif tag == "loca":
94 | transformedData = b""
95 | elif tag == "hmtx":
96 | transformedData = transformHmtx(font)
97 | else:
98 | assert False, "Unknown transformed table tag: %s" % tag
99 |
100 | #assert len(transformedData) < len(origData), (tag, len(transformedData), len(origData))
101 |
102 | return (origData, transformedData)
103 |
104 | def pack255UInt16(n, alternate=0):
105 |     if n < 253:
106 |         ret = struct.pack(">B", n)
107 |     elif n < 506:
108 |         ret = struct.pack(">BB", 255, n - 253)
109 |     elif n < 762:
110 |         if not alternate:
111 |             ret = struct.pack(">BB", 254, n - 506)
112 |         elif alternate == 2 and n <= 508:
113 |             ret = struct.pack(">BB", 255, n - 253)
114 |         else:
115 |             ret = struct.pack(">BH", 253, n)
116 |     else:
117 |         # values >= 762 only fit the 16-bit (253) encoding
118 |         ret = struct.pack(">BH", 253, n)
119 |     return ret
120 |
121 | def packTriplet(x, y, onCurve):
122 | x = int(round(x))
123 | y = int(round(y))
124 | absX = abs(x)
125 | absY = abs(y)
126 | onCurveBit = 0
127 | xSignBit = 0
128 | ySignBit = 0
129 | if not onCurve:
130 | onCurveBit = 128
131 | if x > 0:
132 | xSignBit = 1
133 | if y > 0:
134 | ySignBit = 1
135 | xySignBits = xSignBit + 2 * ySignBit
136 |
137 | fmt = ">B"
138 | flags = b""
139 | glyphs = b""
140 | if x == 0 and absY < 1280:
141 | flags += struct.pack(fmt, onCurveBit + ((absY & 0xf00) >> 7) + ySignBit)
142 | glyphs += struct.pack(fmt, absY & 0xff)
143 | elif y == 0 and absX < 1280:
144 | flags += struct.pack(fmt, onCurveBit + 10 + ((absX & 0xf00) >> 7) + xSignBit)
145 | glyphs += struct.pack(fmt, absX & 0xff)
146 | elif absX < 65 and absY < 65:
147 | flags += struct.pack(fmt, onCurveBit + 20 + ((absX - 1) & 0x30) + (((absY - 1) & 0x30) >> 2) + xySignBits)
148 | glyphs += struct.pack(fmt, (((absX - 1) & 0xf) << 4) | ((absY - 1) & 0xf))
149 | elif absX < 769 and absY < 769:
150 | flags += struct.pack(fmt, onCurveBit + 84 + 12 * (((absX - 1) & 0x300) >> 8) + (((absY - 1) & 0x300) >> 6) + xySignBits)
151 | glyphs += struct.pack(fmt, (absX - 1) & 0xff)
152 | glyphs += struct.pack(fmt, (absY - 1) & 0xff)
153 | elif absX < 4096 and absY < 4096:
154 | flags += struct.pack(fmt, onCurveBit + 120 + xySignBits)
155 | glyphs += struct.pack(fmt, absX >> 4)
156 | glyphs += struct.pack(fmt, ((absX & 0xf) << 4) | (absY >> 8))
157 | glyphs += struct.pack(fmt, absY & 0xff)
158 | else:
159 | flags += struct.pack(fmt, onCurveBit + 124 + xySignBits)
160 | glyphs += struct.pack(fmt, absX >> 8)
161 | glyphs += struct.pack(fmt, absX & 0xff)
162 | glyphs += struct.pack(fmt, absY >> 8)
163 | glyphs += struct.pack(fmt, absY & 0xff)
164 |
165 | return (flags, glyphs)
166 |
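packTriplet's branch for deltas that both fit in 1..64 spreads the magnitudes across the flag byte and a single glyph byte. The round trip for just that branch can be sketched as follows (an illustration of the bit layout, not a full triplet codec):

```python
import struct

def pack_small_pair(dx, dy, on_curve):
    # Mirrors the absX < 65 and absY < 65 branch of packTriplet.
    ax, ay = abs(dx), abs(dy)
    signs = (1 if dx > 0 else 0) + 2 * (1 if dy > 0 else 0)
    flag = ((0 if on_curve else 128) + 20 + ((ax - 1) & 0x30)
            + (((ay - 1) & 0x30) >> 2) + signs)
    glyph = (((ax - 1) & 0xF) << 4) | ((ay - 1) & 0xF)
    return struct.pack(">B", flag), struct.pack(">B", glyph)

def unpack_small_pair(flag, glyph):
    # Invert the packing: high delta bits live in the flag byte,
    # low delta bits in the glyph byte, signs in flag bits 0-1.
    on_curve = not (flag & 128)
    t = (flag & 0x7F) - 20
    ax = ((t & 0x30) | (glyph >> 4)) + 1
    ay = (((t & 0x0C) << 2) | (glyph & 0xF)) + 1
    dx = ax if t & 1 else -ax
    dy = ay if t & 2 else -ay
    return dx, dy, on_curve
```

Note this branch never sees a zero delta; packTriplet routes x == 0 and y == 0 through the single-coordinate classes first.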
167 | def transformGlyf(font, glyphBBox="", alt255UInt16=False):
168 | glyf = font["glyf"]
169 | head = font["head"]
170 |
171 | nContourStream = b""
172 | nPointsStream = b""
173 | flagStream = b""
174 | glyphStream = b""
175 | compositeStream = b""
176 | bboxStream = b""
177 | instructionStream = b""
178 | bboxBitmap = []
179 | bboxBitmapStream = b""
180 |
181 |     for i in range(4 * ((len(glyf.keys()) + 31) // 32)):
182 | bboxBitmap.append(0)
183 |
184 | for glyphName in glyf.glyphOrder:
185 | glyph = glyf[glyphName]
186 | glyphId = glyf.getGlyphID(glyphName)
187 |
188 | alternate255UInt16 = 0
189 |
190 | # nContourStream
191 | nContourStream += struct.pack(">h", glyph.numberOfContours)
192 |
193 | haveInstructions = False
194 |
195 | if glyph.numberOfContours == 0:
196 | if glyphBBox == "empty":
197 | bboxBitmap[glyphId >> 3] |= 0x80 >> (glyphId & 7)
198 | bboxStream += struct.pack(">hhhh", 0, 0, 0, 0)
199 | continue
200 | elif glyph.isComposite():
201 | # compositeStream
202 | more = True
203 | for i in range(len(glyph.components)):
204 | if i == len(glyph.components) - 1:
205 | haveInstructions = hasattr(glyph, "program")
206 | more = False
207 | compositeStream += glyph.components[i].compile(more, haveInstructions, glyf)
208 | else:
209 | # nPointsStream
210 | lastPointIndex = 0
211 | for i in range(glyph.numberOfContours):
212 | nPoints = glyph.endPtsOfContours[i] - lastPointIndex + (i == 0)
213 | data = pack255UInt16(nPoints, alternate=alternate255UInt16)
214 | if nPoints == 506 and alt255UInt16:
215 | num = [v for v in data]
216 | if alternate255UInt16 == 0:
217 | assert num == [254, 0]
218 | elif alternate255UInt16 == 1:
219 | assert num == [253, 1, 250]
220 | else:
221 | assert num == [255, 253]
222 | alternate255UInt16 += 1
223 | nPointsStream += data
224 | lastPointIndex = glyph.endPtsOfContours[i]
225 |
226 | # flagStream & glyphStream
227 | lastX = 0
228 | lastY = 0
229 | lastPointIndex = 0
230 | for i in range(glyph.numberOfContours):
231 | for j in range(lastPointIndex, glyph.endPtsOfContours[i] + 1):
232 | x, y = glyph.coordinates[j]
233 | onCurve = glyph.flags[j] & 0x01
234 | dx = x - lastX
235 | dy = y - lastY
236 | lastX = x
237 | lastY = y
238 | flags, data = packTriplet(dx, dy, onCurve)
239 | flagStream += flags
240 | glyphStream += data
241 | lastPointIndex = glyph.endPtsOfContours[i] + 1
242 | haveInstructions = True
243 |
244 | if haveInstructions:
245 | instructions = glyph.program.getBytecode()
246 | # instructionLength
247 | glyphStream += pack255UInt16(len(instructions), alternate=alt255UInt16)
248 |
249 | # instructionStream
250 | instructionStream += instructions
251 |
252 | writeBBox = False
253 | if glyph.isComposite():
254 | writeBBox = glyphBBox != "nocomposite"
255 | else:
256 | coords = glyph.getCoordinates(glyf)[0]
257 | oldBounds = (glyph.xMin, glyph.yMin, glyph.xMax, glyph.yMax)
258 | newBounds = calcIntBounds(coords)
259 | writeBBox = oldBounds != newBounds
260 |
261 | if writeBBox:
262 | # bboxBitmap
263 | bboxBitmap[glyphId >> 3] |= 0x80 >> (glyphId & 7)
264 |
265 | # bboxStream
266 | bboxStream += struct.pack(">hhhh", glyph.xMin, glyph.yMin, glyph.xMax, glyph.yMax)
267 |
268 | bboxBitmapStream = b"".join([struct.pack(">B", v) for v in bboxBitmap])
269 |
270 | header = deepcopy(woffTransformedGlyfHeader)
271 | header["numGlyphs"] = len(glyf.keys())
272 | header["indexFormat"] = head.indexToLocFormat
273 | header["nContourStreamSize"] = len(nContourStream)
274 | header["nPointsStreamSize"] = len(nPointsStream)
275 | header["flagStreamSize"] = len(flagStream)
276 | header["glyphStreamSize"] = len(glyphStream)
277 | header["compositeStreamSize"] = len(compositeStream)
278 | header["bboxStreamSize"] = len(bboxStream) + len(bboxBitmapStream)
279 | header["instructionStreamSize"] = len(instructionStream)
280 |
281 | data = sstruct.pack(woffTransformedGlyfHeaderFormat, header)
282 | data += nContourStream + nPointsStream + flagStream
283 | data += glyphStream + compositeStream
284 | data += bboxBitmapStream + bboxStream
285 | data += instructionStream
286 |
287 | return data
288 |
289 | def transformHmtx(font):
290 | glyf = font["glyf"]
291 | hhea = font["hhea"]
292 | hmtx = font["hmtx"]
293 | maxp = font["maxp"]
294 |
295 | origData = font.getTableData("hmtx")
296 |
297 | for name in hmtx.metrics:
298 | advance, lsb = hmtx.metrics[name]
299 | xMin = 0
300 | if hasattr(glyf[name], "xMin"):
301 | xMin = glyf[name].xMin
302 | if lsb != xMin:
303 | return origData
304 |
305 | hasLsb = False
306 | hasLeftSideBearing = False
307 |
308 | if hhea.numberOfHMetrics != maxp.numGlyphs:
309 | hasLeftSideBearing = True
310 |
311 | for index, name in enumerate(hmtx.metrics):
312 | if index >= hhea.numberOfHMetrics:
313 | break
314 | advance, lsb = hmtx.metrics[name]
315 | xMin = 0
316 | if hasattr(glyf[name], "xMin"):
317 | xMin = glyf[name].xMin
318 | if lsb != xMin:
319 | hasLsb = True
320 | break
321 |
322 | flags = 0
323 | if not hasLsb:
324 | flags |= 1 << 0
325 | if not hasLeftSideBearing:
326 | flags |= 1 << 1
327 |
328 | data = struct.pack(">B", flags)
329 | for index, name in enumerate(hmtx.metrics):
330 | if index >= hhea.numberOfHMetrics:
331 | break
332 | advance, lsb = hmtx.metrics[name]
333 | data += struct.pack(">H", advance)
334 |
335 | if hasLsb:
336 | for index, name in enumerate(hmtx.metrics):
337 | if index >= hhea.numberOfHMetrics:
338 | break
339 | advance, lsb = hmtx.metrics[name]
340 |             data += struct.pack(">h", lsb)
341 |
342 | if hasLeftSideBearing:
343 | for index, name in enumerate(hmtx.metrics):
344 | if index < hhea.numberOfHMetrics:
345 | continue
346 | advance, lsb = hmtx.metrics[name]
347 |             data += struct.pack(">h", lsb)
348 |
349 | assert len(data) < len(origData)
350 | return data
351 |
352 | def base128Size(n):
353 |     size = 1
354 | while n >= 128:
355 | size += 1
356 | n = n >> 7
357 | return size
358 |
359 | def packBase128(n, bug=False):
360 | size = base128Size(n)
361 | ret = b""
362 | if bug:
363 | ret += struct.pack(">B", 0x80)
364 | for i in range(size):
365 | b = (n >> (7 * (size - i - 1))) & 0x7f
366 | if i < size - 1:
367 | b = b | 0x80
368 | ret += struct.pack(">B", b)
369 | return ret
370 |
371 | def packTestHeader(header):
372 | return sstruct.pack(woffHeaderFormat, header)
373 |
374 | def _setTransformBits(flag, transform):
375 |     if transform == 1:
376 |         flag |= 1 << 6
377 |     elif transform == 2:
378 |         flag |= 1 << 7
379 |     elif transform == 3:
380 |         flag |= 1 << 6 | 1 << 7
381 |     return flag
382 |
383 | def packTestDirectory(directory, knownTags=knownTableTags, skipTransformLength=False, isCollection=False, unsortGlyfLoca=False, Base128Bug=False):
384 | data = b""
385 | directory = [(entry["tag"], entry) for entry in directory]
386 | if not isCollection:
387 | directory = sorted(directory, key=lambda t: t[0])
388 | if unsortGlyfLoca:
389 | loca = None
390 | glyf = None
391 | for i, entry in enumerate(directory):
392 | if entry[0] == "loca": loca = i
393 | elif entry[0] == "glyf": glyf = i
394 |         assert loca is not None
395 |         assert glyf is not None
396 | directory.insert(glyf, directory.pop(loca))
397 | for tag, table in directory:
398 | transformFlag = table["transformFlag"]
399 | assert transformFlag <= 3
400 | if tag in knownTags:
401 | data += struct.pack(">B", _setTransformBits(knownTableTags.index(tag), transformFlag))
402 | else:
403 | data += struct.pack(">B", _setTransformBits(unknownTableTagFlag, transformFlag))
404 | data += struct.pack(">4s", bytes(tag, "utf-8"))
405 | data += packBase128(table["origLength"], bug=Base128Bug)
406 | transformed = False
407 | if tag in transformedTables:
408 | transformed = True
409 | if transformFlag == 3:
410 | transformed = False
411 | else:
412 | transformed = transformFlag != 0
413 |
414 | if transformed and not skipTransformLength:
415 | data += packBase128(table["transformLength"], bug=Base128Bug)
416 | return data
417 |
418 | def packTestCollectionHeader(header):
419 | return struct.pack(">L", header["version"]) + pack255UInt16(header["numFonts"])
420 |
421 | def packTestCollectionDirectory(directory):
422 | data = b""
423 | for entry in directory:
424 | data += pack255UInt16(entry["numTables"])
425 | data += struct.pack(">4s", entry["flavor"])
426 | for i in entry["index"]:
427 | data += pack255UInt16(entry["index"][i])
428 | return data
429 |
430 | def packTestMetadata(metadata, havePrivateData=False):
431 | origMetadata = metadata[0]
432 | compMetadata = metadata[1]
433 | if havePrivateData:
434 | compMetadata = padData(compMetadata)
435 | return compMetadata
436 |
437 | def packTestPrivateData(privateData):
438 | return privateData
439 |
--------------------------------------------------------------------------------
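The `pack255UInt16` and `packBase128` helpers above implement WOFF2's two variable-length integer codings. As a sanity check, the canonical (non-alternate) paths can be round-tripped with a small standalone decoder; this sketch is illustrative and independent of the test-suite modules:

```python
import struct

def pack_255_uint16(n):
    # Canonical (non-alternate) 255UInt16 encoding.
    if n < 253:
        return struct.pack(">B", n)
    elif n < 506:
        return struct.pack(">BB", 255, n - 253)
    elif n < 762:
        return struct.pack(">BB", 254, n - 506)
    return struct.pack(">BH", 253, n)

def unpack_255_uint16(data):
    code = data[0]
    if code == 253:
        return struct.unpack(">H", data[1:3])[0]
    elif code == 254:
        return data[1] + 506
    elif code == 255:
        return data[1] + 253
    return code

def pack_base128(n):
    # UIntBase128: 7 bits per byte, high bit set on all but the last byte.
    out = []
    while True:
        out.insert(0, n & 0x7F)
        n >>= 7
        if not n:
            break
    return bytes(b | 0x80 for b in out[:-1]) + bytes(out[-1:])

def unpack_base128(data):
    n = 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
    return n
```

Note that 255UInt16 deliberately admits several encodings of the same value, which is exactly what the `alternate` parameter of `pack255UInt16` exercises.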
/generators/DecoderTestCaseGenerator.py:
--------------------------------------------------------------------------------
1 | """
2 | This script generates the decoder test cases. It creates a directory
3 | named "Decoder" one level up from the directory containing this script.
4 | That directory will have the structure:
5 |
6 | /Decoder
7 | README.txt - information about how the tests were generated and how they should be modified
8 | /Tests
9 | testcaseindex.xht - index of all test cases
10 | test-case-name-number.otf/ttf - individual SFNT test case
11 | /resources
12 | index.css - index CSS file
13 |
14 | Within this script, each test case is generated with a call to the
15 | writeTest function. WOFF data (and, for round-trip tests, SFNT data)
16 | must be passed along with details about the data. This function will
17 | generate the files and register the case in the suite index.
18 | """
19 |
20 | import os
21 | import shutil
22 | import glob
23 | import struct
24 | import zipfile
25 | from fontTools.misc import sstruct
26 | from fontTools.ttLib import TTFont, getSearchRange
27 | from fontTools.ttLib.sfnt import sfntDirectoryFormat, sfntDirectorySize, sfntDirectoryEntryFormat, sfntDirectoryEntrySize,\
28 | ttcHeaderFormat, ttcHeaderSize
29 | from testCaseGeneratorLib.defaultData import defaultSFNTTestData, defaultTestData
30 | from testCaseGeneratorLib.sfnt import packSFNT, getSFNTCollectionData, getWOFFCollectionData
31 | from testCaseGeneratorLib.paths import resourcesDirectory, decoderDirectory, decoderTestDirectory,\
32 | decoderResourcesDirectory, sfntTTFSourcePath
33 | from testCaseGeneratorLib.woff import packTestDirectory, packTestHeader
34 | from testCaseGeneratorLib.html import generateDecoderIndexHTML, expandSpecLinks
35 | from testCaseGeneratorLib.utilities import padData, calcPaddingLength, calcTableChecksum
36 | from testCaseGeneratorLib import sharedCases
37 | from testCaseGeneratorLib.sharedCases import *
38 |
39 | # ------------------
40 | # Directory Creation
41 | # (if needed)
42 | # ------------------
43 |
44 | if not os.path.exists(decoderDirectory):
45 | os.makedirs(decoderDirectory)
46 | if not os.path.exists(decoderTestDirectory):
47 | os.makedirs(decoderTestDirectory)
48 | if not os.path.exists(decoderResourcesDirectory):
49 | os.makedirs(decoderResourcesDirectory)
50 |
51 | # -------------------
52 | # Move HTML Resources
53 | # -------------------
54 |
55 | # index css
56 | destPath = os.path.join(decoderResourcesDirectory, "index.css")
57 | if os.path.exists(destPath):
58 | os.remove(destPath)
59 | shutil.copy(os.path.join(resourcesDirectory, "index.css"), destPath)
60 |
61 | # ---------------
62 | # Test Case Index
63 | # ---------------
64 |
65 | # As the tests are generated a log will be kept.
66 | # This log will be translated into an index after
67 | # all of the tests have been written.
68 |
69 | indexNote = """
70 | The tests in this suite represent SFNT data to be used for WOFF
71 | conversion without any alteration or correction.
72 | """.strip()
73 |
74 | roundTripNote = """
75 | These files are provided as test cases for checking that
76 | converting to WOFF and back to SFNT results in a file
77 | that is functionally equivalent to the original SFNT.
78 | """.strip()
79 |
80 | validationNote = """
81 | These files are provided as test cases for checking that
82 | converting WOFF back to SFNT results in a file
83 | that conforms to the OFF structure.
84 | """.strip()
85 |
86 | groupDefinitions = [
87 | # identifier, title, spec section, category note
88 | ("roundtrip", "Round-Trip Tests", None, roundTripNote),
89 | ("validation", "OFF Validation Tests", None, validationNote),
90 | ]
91 |
92 | testRegistry = {}
93 | for group in groupDefinitions:
94 | tag = group[0]
95 | testRegistry[tag] = []
96 |
97 | # -----------------
98 | # Test Case Writing
99 | # -----------------
100 |
101 | registeredIdentifiers = set()
102 | registeredTitles = set()
103 | registeredDescriptions = set()
104 |
105 | def writeTest(identifier, title, description, data, specLink=None, credits=[], roundTrip=False, flavor="CFF"):
106 | """
107 | This function generates all of the files needed by a test case and
108 | registers the case with the suite. The arguments:
109 |
110 | identifier: The identifier for the test case. The identifier must be
111 | a hyphen-separated sequence of group name (from the groupDefinitions
112 | listed above), test case description (arbitrary length) and a number
113 | to make the name unique. The number should be zero-padded to a length
114 | of three characters (i.e. "001" instead of "1").
115 |
116 | title: A thorough, but not too long, title for the test case.
117 |
118 | description: A detailed statement about what the test case is proving.
119 |
120 | data: The complete binary data for either the WOFF, or both WOFF and SFNT.
121 |
122 | specLink: The anchor in the WOFF spec that the test case is testing.
123 |
124 | credits: A list of dictionaries defining the credits for the test case. The
125 | dictionaries must have this form:
126 |
127 | title="Name of the author or reviewer",
128 | role="author or reviewer",
129 | link="mailto:email or http://contactpage"
130 |
131 | roundTrip: A boolean indicating if this is a round-trip test.
132 |
133 | flavor: The flavor of the WOFF data. The options are CFF or TTF.
134 | """
135 | print("Compiling %s..." % identifier)
136 | assert identifier not in registeredIdentifiers, "Duplicate identifier! %s" % identifier
137 | assert title not in registeredTitles, "Duplicate title! %s" % title
138 | assert description not in registeredDescriptions, "Duplicate description! %s" % description
139 | registeredIdentifiers.add(identifier)
140 | registeredTitles.add(title)
141 | registeredDescriptions.add(description)
142 |
143 | specLink = expandSpecLinks(specLink)
144 |
145 | # generate the SFNT
146 | if roundTrip:
147 | sfntPath = os.path.join(decoderTestDirectory, identifier)
148 | if flavor == "CFF":
149 | sfntPath += ".otf"
150 | else:
151 | sfntPath += ".ttf"
152 | f = open(sfntPath, "wb")
153 | f.write(data[1])
154 | f.close()
155 | data = data[0]
156 | woffPath = os.path.join(decoderTestDirectory, identifier) + ".woff2"
157 | f = open(woffPath, "wb")
158 | f.write(data)
159 | f.close()
160 |
161 | # register the test
162 | tag = identifier.split("-")[0]
163 | testRegistry[tag].append(
164 | dict(
165 | identifier=identifier,
166 | title=title,
167 | description=description,
168 | roundTrip=roundTrip,
169 | specLink=specLink
170 | )
171 | )
172 |
173 | def writeTestCollection(identifier, title, description, data, specLink=None, credits=[]):
174 | """
175 | This function generates all of the files needed by a test case and
176 | registers the case with the suite. The arguments:
177 |
178 | identifier: The base identifier for the test case. The identifier must be
179 | a - separated sequence of group name (from the groupDefinitions
180 | listed above) and test case description (arbitrary length).
181 |
182 | title: A thorough, but not too long, title for the test case.
183 |
184 | description: A detailed statement about what the test case is proving.
185 |
186 | data: A list of the complete binary data for WOFF.
187 |
188 | specLink: The anchor in the WOFF spec that the test case is testing.
189 |
190 | credits: A list of dictionaries defining the credits for the test case. The
191 | dictionaries must have this form:
192 |
193 | title="Name of the author or reviewer",
194 | role="author or reviewer",
195 | link="mailto:email or http://contactpage"
196 | """
197 | assert description not in registeredDescriptions, "Duplicate description! %s" % description
198 | registeredDescriptions.add(description)
199 |
200 | specLink = expandSpecLinks(specLink)
201 | tag = identifier.split("-")[0]
202 |
203 | for i, d in enumerate(data):
204 | number = "%03d" % (i + 1)
205 | test_identifier = identifier + "-" + number
206 | test_title = title + " " + number
207 | print("Compiling %s..." % test_identifier)
208 |
209 | assert test_title not in registeredTitles, "Duplicate title! %s" % test_title
210 | assert test_identifier not in registeredIdentifiers, "Duplicate identifier! %s" % test_identifier
211 | registeredIdentifiers.add(test_identifier)
212 | registeredTitles.add(test_title)
213 |
214 | woffPath = os.path.join(decoderTestDirectory, test_identifier) + ".woff2"
215 | f = open(woffPath, "wb")
216 | f.write(d)
217 | f.close()
218 |
219 | # register the test
220 | testRegistry[tag].append(
221 | dict(
222 | identifier=test_identifier,
223 | title=test_title,
224 | description=description,
225 | roundTrip=False,
226 | specLink=specLink
227 | )
228 | )
229 |
230 | # -----
231 | # Tests
232 | # -----
233 |
234 | def makeCollectionOffsetTables1():
235 | paths = [sfntTTFSourcePath, sfntTTFSourcePath, sfntTTFSourcePath]
236 | woffdata = getWOFFCollectionData(paths)
237 | sfntdata = getSFNTCollectionData(paths)
238 |
239 | return woffdata, sfntdata
240 |
241 | writeTest(
242 | identifier="roundtrip-offset-tables-001",
243 | title="Font Collection Offset Tables",
244 | description="TTF flavored font collection to verify that the output file has the same number of font entries in the font collection and that each font entry is represented by the same set of font tables as the input file.",
245 | roundTrip=True,
246 | flavor="TTF",
247 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
248 | specLink="#conform-mustRestoreCollectionOffsetTables",
249 | data=makeCollectionOffsetTables1()
250 | )
251 |
252 | def makeFixCollection1():
253 | paths = [sfntTTFSourcePath, sfntTTFSourcePath, sfntTTFSourcePath]
254 | woffdata = getWOFFCollectionData(paths)
255 | sfntdata = getSFNTCollectionData(paths, DSIG=True)
256 |
257 | return woffdata, sfntdata
258 |
259 | writeTest(
260 | identifier="roundtrip-collection-dsig-001",
261 | title="Font Collection With DSIG table",
262 | description="TTF flavored font collection with DSIG table to verify that the output file has the same number of font entries in the font collection and that the DSIG table entry was deleted. Check the font collection header format; if it is set to version 2.0, make sure that it was either modified to have the TTC Header fields {ulDsigTag, ulDsigLength, ulDsigOffset} set to zeros, or that the TTC Header was converted to version 1.0.",
263 | roundTrip=True,
264 | flavor="TTF",
265 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
266 | specLink="#conform-mustFixCollection",
267 | data=makeFixCollection1()
268 | )
269 |
270 | def makeCollectionOrder1():
271 | paths = [sfntTTFSourcePath, sfntTTFSourcePath, sfntTTFSourcePath]
272 | woffdata = getWOFFCollectionData(paths, reverseNames=True)
273 | sfntdata = getSFNTCollectionData(paths, reverseNames=True)
274 |
275 | return woffdata, sfntdata
276 |
277 | writeTest(
278 | identifier="roundtrip-collection-order-001",
279 | title="Font Collection With Unsorted Fonts",
280 | description="TTF flavored font collection with fonts not in alphabetical order. The encoder/decoder must keep the original font order in the collection.",
281 | roundTrip=True,
282 | flavor="TTF",
283 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
284 | specLink="#conform-mustRestoreFontOrder",
285 | data=makeCollectionOrder1()
286 | )
287 |
288 | def makeHmtxLSB1():
289 | woffdata = makeHmtxTransform1()
290 | sfntdata = makeLSB1()
291 |
292 | return woffdata, sfntdata
293 |
294 | writeTest(
295 | identifier="roundtrip-hmtx-lsb-001",
296 | title="Font With Hmtx Table",
297 | description="TTF flavored font with hmtx table. The encoder/decoder must keep the same 'hmtx' table entries as were encoded in the original input file.",
298 | roundTrip=True,
299 | flavor="TTF",
300 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
301 | specLink="#conform-mustReconstructLSBs",
302 | data=makeHmtxLSB1(),
303 | )
304 |
305 | def makeGlyfWithOverlaps():
306 | sfntdata = makeGlyfOverlapBitmapSFNT()
307 | woffdata = makeGlyfOverlapBitmap()
308 | return woffdata, sfntdata
309 |
310 | def makeGlyfWithNoOverlaps():
311 | sfntdata = makeGlyfNoOverlapBitmapSFNT()
312 | woffdata = makeGlyfNoOverlapBitmap()
313 | return woffdata, sfntdata
314 |
315 | writeTest(
316 | identifier="roundtrip-glyf-overlaps-001",
317 | title="Font with Overlap Bitmap",
318 | description="TTF flavored font with glyphs that have the overlap simple bit set. The encoder/decoder must keep the bits the same.",
319 | roundTrip=True,
320 | flavor="TTF",
321 | credits=[dict(title="Garret Rieger", role="author")],
322 | specLink="#conform-hasOverlap #conform-mustCheckOptionsFlag0 #conform-mustReconstructOverlap",
323 | data=makeGlyfWithOverlaps(),
324 | )
325 |
326 | writeTest(
327 | identifier="roundtrip-glyf-overlaps-002",
328 | title="Font with No Overlap Bitmap",
329 | description=("TTF flavored font with no glyphs that have the overlap simple bit set. "
330 | "The encoder/decoder must keep the bits the same."),
331 | roundTrip=True,
332 | flavor="TTF",
333 | credits=[dict(title="Garret Rieger", role="author")],
334 | specLink="#conform-noOverlap #conform-mustZeroOverlap",
335 | data=makeGlyfWithNoOverlaps(),
336 | )
337 |
338 | writeTest(
339 | identifier="validation-loca-format-001",
340 | title=makeValidLoca1Title,
341 | description=makeValidLoca1Description,
342 | credits=makeValidLoca1Credits,
343 | roundTrip=False,
344 | flavor="TTF",
345 | specLink="#conform-mustRecordLocaOffsets",
346 | data=makeValidLoca1()
347 | )
348 |
349 | writeTest(
350 | identifier="validation-loca-format-002",
351 | title=makeValidLoca2Title,
352 | description=makeValidLoca2Description,
353 | credits=makeValidLoca2Credits,
354 | roundTrip=False,
355 | flavor="TTF",
356 | specLink="#conform-mustRecordLocaOffsets",
357 | data=makeValidLoca2()
358 | )
359 |
360 | writeTest(
361 | identifier="validation-checksum-001",
362 | title="WOFF Checksum Calculation",
363 | description="Valid CFF flavored WOFF file; the output file is put through an OFF validator to check the validity of table checksums.",
364 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
365 | roundTrip=False,
366 | specLink="#conform-mustCalculateCheckSum",
367 | data=makeValidWOFF1()
368 | )
369 |
370 | writeTest(
371 | identifier="validation-checksum-002",
372 | title="WOFF Head Checksum Calculation",
373 | description="Valid CFF flavored WOFF file; the output file is put through an OFF validator to check the validity of the head table checkSumAdjustment.",
374 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
375 | roundTrip=False,
376 | specLink="#conform-mustRecalculateHeadCheckSum",
377 | data=makeValidWOFF1()
378 | )
379 |
380 | metadata = [
381 | "schema-vendor-001", "schema-vendor-002", "schema-vendor-003",
382 | "schema-vendor-006", "schema-vendor-007", "schema-vendor-009",
383 | "schema-credits-002", "schema-credit-001", "schema-credit-002",
384 | "schema-credit-003", "schema-credit-005", "schema-credit-006",
385 | "schema-credit-008", "schema-description-001", "schema-description-002",
386 | "schema-description-003", "schema-description-004",
387 | "schema-description-005", "schema-description-006",
388 | "schema-description-007", "schema-license-001", "schema-license-002",
389 | "schema-license-003", "schema-license-004", "schema-license-005",
390 | "schema-license-006", "schema-license-007", "schema-license-008",
391 | "schema-copyright-001", "schema-copyright-002", "schema-copyright-003",
392 | "schema-copyright-004", "schema-copyright-005", "schema-trademark-002",
393 | "schema-trademark-003", "schema-trademark-004", "schema-trademark-005",
394 | "schema-licensee-001", "schema-extension-001", "schema-extension-002",
395 | "schema-extension-003", "schema-extension-004", "schema-extension-005",
396 | "schema-extension-006", "schema-extension-007", "schema-extension-012",
397 | "schema-extension-013", "schema-extension-014", "schema-extension-015",
398 | "schema-extension-016", "schema-extension-018", "schema-extension-021",
399 | "schema-extension-022", "schema-extension-023", "schema-extension-024",
400 | "schema-extension-025", "schema-extension-026", "schema-extension-027",
401 | "schema-extension-033", "schema-extension-034", "schema-extension-035",
402 | "schema-extension-036", "schema-extension-037", "schema-extension-039",
403 | "schema-extension-042", "schema-extension-043", "schema-extension-044",
404 | "schema-extension-045", "schema-extension-046", "schema-extension-048",
405 | "encoding-001", "encoding-004", "encoding-005", "schema-metadata-001",
406 | "schema-uniqueid-001", "schema-uniqueid-002", "schema-credits-001",
407 | "schema-description-013", "schema-description-014",
408 | "schema-description-016", "schema-description-019",
409 | "schema-description-020", "schema-description-021",
410 | "schema-description-022", "schema-description-023",
411 | "schema-description-025", "schema-description-026",
412 | "schema-description-027", "schema-description-028",
413 | "schema-description-029", "schema-description-030",
414 | "schema-description-032", "schema-license-010", "schema-license-014",
415 | "schema-license-015", "schema-license-017", "schema-license-020",
416 | "schema-license-021", "schema-license-022", "schema-license-023",
417 | "schema-license-024", "schema-license-026", "schema-license-027",
418 | "schema-license-028", "schema-license-029", "schema-license-030",
419 | "schema-license-031", "schema-license-033", "schema-copyright-011",
420 | "schema-copyright-012", "schema-copyright-014", "schema-copyright-017",
421 | "schema-copyright-018", "schema-copyright-019", "schema-copyright-020",
422 | "schema-copyright-021", "schema-copyright-023", "schema-copyright-024",
423 | "schema-copyright-025", "schema-copyright-026", "schema-copyright-027",
424 | "schema-copyright-028", "schema-copyright-030", "schema-trademark-001",
425 | "schema-trademark-011", "schema-trademark-012", "schema-trademark-014",
426 | "schema-trademark-017", "schema-trademark-018", "schema-trademark-019",
427 | "schema-trademark-020", "schema-trademark-021", "schema-trademark-023",
428 | "schema-trademark-024", "schema-trademark-025", "schema-trademark-026",
429 | "schema-trademark-027", "schema-trademark-028", "schema-trademark-030",
430 | "schema-licensee-004", "schema-licensee-005", "schema-licensee-007",
431 | ]
432 |
433 | OFFData = [
434 | makeValidWOFF1(), makeValidWOFF2(), makeValidWOFF3(), makeValidWOFF4(),
435 | makeValidWOFF5(), makeValidWOFF6(), makeValidWOFF7(), makeValidWOFF8(),
436 | makeLocaSizeTest1(), makeLocaSizeTest2(), makeLocaSizeTest3(),
437 | makeGlyfBBox1()
438 | ]
439 |
440 | for identifier in metadata:
441 | parts = identifier.split("-")
442 | number = int(parts[-1])
443 | group = parts[:-1]
444 | group = [i.title() for i in group]
445 | group = "".join(group)
446 | importBase = "metadata" + group + str(number)
447 | caseMetadata = getattr(sharedCases, importBase + "Metadata")
448 | OFFData.append(makeMetadataTest(caseMetadata)[0])
449 |
450 | writeTestCollection(
451 | identifier="validation-off",
452 | title="Valid WOFF File",
453 | description="Valid WOFF file from the file format tests; the decoded file should be run through a font validator to confirm the OFF structure validity.",
454 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
455 | specLink="#conform-mustProduceOFF",
456 | data=OFFData
457 | )
458 |
459 | # ------------------
460 | # Generate the Index
461 | # ------------------
462 |
463 | print("Compiling index...")
464 |
465 | testGroups = []
466 |
467 | for tag, title, url, note in groupDefinitions:
468 | group = dict(title=title, url=url, testCases=testRegistry[tag], note=note)
469 | testGroups.append(group)
470 |
471 | generateDecoderIndexHTML(directory=decoderTestDirectory, testCases=testGroups, note=indexNote)
472 |
473 | # ----------------
474 | # Generate the zip
475 | # ----------------
476 |
477 | print("Compiling zip file...")
478 |
479 | zipPath = os.path.join(decoderTestDirectory, "DecoderTestFonts.zip")
480 | if os.path.exists(zipPath):
481 | os.remove(zipPath)
482 |
483 | allBinariesZip = zipfile.ZipFile(zipPath, "w")
484 |
485 | sfntPattern = os.path.join(decoderTestDirectory, "*.*tf")
486 | woffPattern = os.path.join(decoderTestDirectory, "*.woff2")
487 | filesOnDisk = glob.glob(sfntPattern) + glob.glob(woffPattern)
488 | for path in filesOnDisk:
489 | ext = os.path.splitext(path)[1]
490 | assert ext in (".otf", ".ttf", ".woff2")
491 | allBinariesZip.write(path, os.path.basename(path))
492 |
493 | allBinariesZip.close()
494 |
495 | # ---------------------
496 | # Generate the Manifest
497 | # ---------------------
498 |
499 | print("Compiling manifest...")
500 |
501 | manifest = []
502 |
503 | for tag, title, url, note in groupDefinitions:
504 | for testCase in testRegistry[tag]:
505 | identifier = testCase["identifier"]
506 | title = testCase["title"]
507 | assertion = testCase["description"]
508 | links = "#" + testCase["specLink"].split("#")[-1]
509 | # XXX force the chapter onto the links
510 | links = "#TableDirectory," + links
511 | flags = ""
512 | credits = ""
513 | # format the line
514 | line = "%s\t%s\t%s\t%s\t%s\t%s" % (
515 | identifier, # id
516 | "", # reference
517 | title, # title
518 | flags, # flags
519 | links, # links
520 | assertion # assertion
521 | )
522 | # store
523 | manifest.append(line)
524 |
525 | path = os.path.join(decoderDirectory, "manifest.txt")
526 | if os.path.exists(path):
527 | os.remove(path)
528 | f = open(path, "w")
529 | f.write("\n".join(manifest))
530 | f.close()
531 |
532 | # -----------------------
533 | # Check for Unknown Files
534 | # -----------------------
535 |
536 | otfPattern = os.path.join(decoderTestDirectory, "*.otf")
537 | ttfPattern = os.path.join(decoderTestDirectory, "*.ttf")
538 | woffPattern = os.path.join(decoderTestDirectory, "*.woff2")
539 | filesOnDisk = glob.glob(otfPattern) + glob.glob(ttfPattern) + glob.glob(woffPattern)
540 |
541 | for path in filesOnDisk:
542 | identifier = os.path.basename(path)
543 | identifier = identifier.split(".")[0]
544 | if identifier not in registeredIdentifiers:
545 | print("Unknown file:", path)
546 |
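As an aside, the metadata loop above derives sharedCases attribute names from test identifiers (for example "schema-vendor-001" resolves to `metadataSchemaVendor1Metadata`). A sketch of that mapping, using a hypothetical function name for illustration:

```python
def metadata_attribute_name(identifier):
    # Mirrors the loop above: split on hyphens, title-case the group words,
    # and strip leading zeros from the trailing number.
    parts = identifier.split("-")
    number = int(parts[-1])
    group = "".join(part.title() for part in parts[:-1])
    return "metadata" + group + str(number) + "Metadata"

assert metadata_attribute_name("schema-vendor-001") == "metadataSchemaVendor1Metadata"
assert metadata_attribute_name("encoding-004") == "metadataEncoding4Metadata"
```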
--------------------------------------------------------------------------------
/generators/testCaseGeneratorLib/html.py:
--------------------------------------------------------------------------------
1 | """
2 | Test case HTML generator.
3 | """
4 |
5 | import os
6 | import html
7 |
8 | from testCaseGeneratorLib.paths import userAgentTestResourcesDirectory
9 |
10 | # ------------------------
11 | # Specification URLs
12 | # This is used frequently.
13 | # ------------------------
14 |
15 | specificationURL = "http://dev.w3.org/webfonts/WOFF2/spec/"
16 | woff1SpecificationURL = "http://www.w3.org/TR/WOFF/"
17 |
18 | # -------------------
19 | # Do not edit warning
20 | # -------------------
21 |
22 | doNotEditWarning = "<!-- DO NOT EDIT! This file was generated by a script. -->"
23 |
24 | # ------------------
25 | # SFNT Display Tests
26 | # ------------------
27 |
28 | testPassCharacter = "P"
29 | testFailCharacter = "F"
30 | refPassCharacter = testPassCharacter
31 |
32 | testCSS = """
33 | @import url("support/test-fonts.css");
34 | @font-face {
35 | font-family: "WOFF Test";
36 | src: url("%s/%s.woff2") format("woff2");
37 | }
38 | body {
39 | font-size: 20px;
40 | }
41 | pre {
42 | font-size: 12px;
43 | }
44 | .test {
45 | font-family: "WOFF Test", "WOFF Test %s Fallback";
46 | font-size: 200px;
47 | margin-top: 50px;
48 | }
49 | """.strip()
50 |
51 | refCSS = """
52 | @import url("support/test-fonts.css");
53 | body {
54 | font-size: 20px;
55 | }
56 | pre {
57 | font-size: 12px;
58 | }
59 | .test {
60 | font-family: "WOFF Test %s Reference";
61 | font-size: 200px;
62 | margin-top: 50px;
63 | }
64 | """.strip()
65 |
66 | def escapeAttributeText(text):
67 | text = html.escape(text)
68 | replacements = {
69 | "\"" : """,
70 | }
71 | for before, after in replacements.items():
72 | text = text.replace(before, after)
73 | return text
74 |
75 | def _generateSFNTDisplayTestHTML(
76 | css, bodyCharacter,
77 | fileName=None, refFileName=None, flavor=None,
78 | title=None, specLinks=[], assertion=None,
79 | credits=[], flags=[],
80 | metadataIsValid=None,
81 | metadataToDisplay=None,
82 | extraSFNTNotes=[],
83 | extraMetadataNotes=[],
84 | chapterURL=None
85 | ):
86 | assert flavor is not None
87 | assert title is not None
88 | assert specLinks
89 | assert assertion is not None
90 | html_string = [
91 | "<?xml version=\"1.0\" encoding=\"UTF-8\"?>",
92 | doNotEditWarning,
93 | "<html xmlns=\"http://www.w3.org/1999/xhtml\">"
94 | ]
95 | # head
96 | html_string.append("\t<head>")
97 | ## encoding
98 | s = "\t\t<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\" />"
99 | html_string.append(s)
100 | ## title
101 | s = "\t\t<title>WOFF Test: %s</title>" % html.escape(title)
102 | html_string.append(s)
103 | ## author
104 | for credit in credits:
105 | role = credit.get("role")
106 | title = credit.get("title")
107 | link = credit.get("link")
108 | date = credit.get("date")
109 | s = "\t\t<link rel=\"%s\" title=\"%s\" href=\"%s\" />" % (role, title, link)
110 | if date:
111 | s += " <!-- %s -->" % date
112 | html_string.append(s)
113 | ## link
114 | assert chapterURL is not None
115 | s = "\t\t<link rel=\"help\" href=\"%s\" />" % chapterURL
116 | html_string.append(s)
117 | for link in specLinks:
118 | s = "\t\t<link rel=\"help\" href=\"%s\" />" % link
119 | html_string.append(s)
120 | ## reviewer
121 | s = '\t\t<link rel="reviewer" title="Chris Lilley" href="mailto:chris@w3.org" />'
122 | html_string.append(s)
123 | # matching reference
124 | if refFileName:
125 | s = '\t\t<link rel="match" href="%s" />' % refFileName
126 | html_string.append(s)
127 | ## flags
128 | if flags:
129 | s = "\t\t<meta name=\"flags\" content=\"%s\" />" % " ".join(flags)
130 | html_string.append(s)
131 | ## assertion
132 | s = "\t\t<meta name=\"assert\" content=\"%s\" />" % escapeAttributeText(assertion)
133 | html_string.append(s)
134 | ## css
135 | html_string.append("\t\t<style type=\"text/css\"><![CDATA[")
136 | for line in css.splitlines():
137 | html_string.append("\t\t\t" + line)
138 | html_string.append("\t\t]]></style>")
139 | ## close
140 | html_string.append("\t</head>")
141 | # body
142 | html_string.append("\t<body>")
143 | ## note
144 | if metadataIsValid is None:
145 | s = "\t\t<p>Test passes if the word PASS appears below.</p>"
146 | else:
147 | if not metadataIsValid:
148 | s = "\t\t<p>If the UA does not display WOFF metadata, the test passes if the word PASS appears below.</p>\n"
149 | s += "\t\t<p>The Extended Metadata Block is not valid and must not be displayed. If the UA does display it, the test fails.</p>"
150 | else:
151 | s = "\t\t<p>Test passes if the word PASS appears below.</p>\n"
152 | s += "\t\t<p>The Extended Metadata Block is valid and may be displayed to the user upon request.</p>"
153 | html_string.append(s)
154 | # extra notes
155 | for note in extraSFNTNotes:
156 | s = "\t\t<p>%s</p>" % html.escape(note)
157 | html_string.append(s)
158 | for note in extraMetadataNotes:
159 | s = "\t\t<p>%s</p>" % html.escape(note)
160 | html_string.append(s)
161 | ## test case
162 | s = "\t\t<div class=\"test\">%s</div>" % bodyCharacter
163 | html_string.append(s)
164 | ## show metadata
165 | if metadataToDisplay:
166 | s = "\t\t<p>The XML contained in the Extended Metadata Block is below.</p>"
167 | html_string.append(s)
168 | html_string.append("\t\t<pre>")
169 | html_string.append(html.escape(metadataToDisplay))
170 | html_string.append("\t\t</pre>")
171 | ## close
172 | html_string.append("\t</body>")
173 | # close
174 | html_string.append("</html>")
175 | # finalize
176 | html_string = "\n".join(html_string)
177 | return html_string
178 |
179 | def generateSFNTDisplayTestHTML(
180 | fileName=None, directory=None, flavor=None, title=None,
181 | sfntDisplaySpecLink=None, metadataDisplaySpecLink=None, assertion=None,
182 | credits=[], flags=[],
183 | shouldDisplay=None, metadataIsValid=None, metadataToDisplay=None,
184 | extraSFNTNotes=[], extraMetadataNotes=[],
185 | chapterURL=None
186 | ):
187 | bodyCharacter = testFailCharacter
188 | if shouldDisplay:
189 | bodyCharacter = testPassCharacter
190 | dirName = os.path.basename(userAgentTestResourcesDirectory)
191 | css = testCSS % (dirName, fileName, flavor)
192 | specLinks = []
193 | if sfntDisplaySpecLink:
194 | specLinks += sfntDisplaySpecLink
195 | if metadataDisplaySpecLink:
196 | specLinks.append(metadataDisplaySpecLink)
197 | html_string = _generateSFNTDisplayTestHTML(
198 | css, bodyCharacter,
199 | fileName=fileName,
200 | refFileName='%s-ref.xht' % fileName,
201 | flavor=flavor,
202 | title=title,
203 | specLinks=specLinks,
204 | assertion=assertion,
205 | credits=credits, flags=flags,
206 | metadataIsValid=metadataIsValid,
207 | metadataToDisplay=metadataToDisplay,
208 | extraSFNTNotes=extraSFNTNotes,
209 | extraMetadataNotes=extraMetadataNotes,
210 | chapterURL=chapterURL
211 | )
212 | # write the file
213 | path = os.path.join(directory, fileName) + ".xht"
214 | f = open(path, "w")
215 | f.write(html_string)
216 | f.close()
217 |
218 | def generateSFNTDisplayRefHTML(
219 | fileName=None, directory=None, flavor=None, title=None,
220 | sfntDisplaySpecLink=None, metadataDisplaySpecLink=None,
221 | assertion=None, credits=[], flags=[],
222 | shouldDisplay=None, metadataIsValid=None, metadataToDisplay=None,
223 | extraSFNTNotes=[], extraMetadataNotes=[],
224 | chapterURL=None
225 | ):
226 | bodyCharacter = refPassCharacter
227 | css = refCSS % flavor
228 | specLinks = []
229 | if sfntDisplaySpecLink:
230 | specLinks += sfntDisplaySpecLink
231 | if metadataDisplaySpecLink:
232 | specLinks.append(metadataDisplaySpecLink)
233 | html_string = _generateSFNTDisplayTestHTML(
234 | css, bodyCharacter,
235 | fileName=fileName, flavor=flavor,
236 | title=title,
237 | specLinks=specLinks,
238 | assertion=assertion,
239 | credits=credits, flags=flags,
240 | metadataIsValid=metadataIsValid,
241 | metadataToDisplay=metadataToDisplay,
242 | extraSFNTNotes=extraSFNTNotes,
243 | extraMetadataNotes=extraMetadataNotes,
244 | chapterURL=chapterURL
245 | )
246 | # write the file
247 | path = os.path.join(directory, fileName) + "-ref.xht"
248 | f = open(path, "w")
249 | f.write(html_string)
250 | f.close()
251 |
252 | def poorManMath(text):
253 | import re
254 | return re.sub(r"\^\{(.*.)\}", r"<sup>\1</sup>", text)
255 |
256 | def generateSFNTDisplayIndexHTML(directory=None, testCases=[]):
257 | testCount = sum([len(group["testCases"]) for group in testCases])
258 | html_string = [
259 | "<?xml version=\"1.0\" encoding=\"UTF-8\"?>",
260 | doNotEditWarning,
261 | "<html xmlns=\"http://www.w3.org/1999/xhtml\">",
262 | "\t<head>",
263 | "\t\t<title>WOFF 2.0: User Agent Test Suite</title>",
264 | "\t\t<style type=\"text/css\">",
265 | "\t\t\t@import \"resources/index.css\";",
266 | "\t\t</style>",
267 | "\t</head>",
268 | "\t<body>",
269 | "\t\t<h1>WOFF 2.0: User Agent Test Suite (%d tests)</h1>" % testCount
270 | ]
271 | # add the test groups
272 | for group in testCases:
273 | title = group["title"]
274 | title = html.escape(title)
275 | # write the group header
276 | html_string.append("")
277 | html_string.append("\t\t<h2>%s</h2>" % title)
278 | # write the individual test cases
279 | for test in group["testCases"]:
280 | identifier = test["identifier"]
281 | title = test["title"]
282 | title = html.escape(title)
283 | title = poorManMath(title)
284 | assertion = test["assertion"]
285 | assertion = html.escape(assertion)
286 | assertion = poorManMath(assertion)
287 | sfntExpectation = test["sfntExpectation"]
288 | if sfntExpectation:
289 | sfntExpectation = "Display"
290 | else:
291 | sfntExpectation = "Reject"
292 | sfntURL = test["sfntURL"]
293 | metadataExpectation = test["metadataExpectation"]
294 | if metadataExpectation is None:
295 | metadataExpectation = "None"
296 | elif metadataExpectation:
297 | metadataExpectation = "Display"
298 | else:
299 | metadataExpectation = "Reject"
300 | metadataURL = test["metadataURL"]
301 | # start the test case div
302 | html_string.append("\t\t<div class=\"testCase\" id=\"%s\">" % identifier)
303 | # start the overview div
304 | html_string.append("\t\t\t<div class=\"testCaseOverview\">")
305 | # title
306 | html_string.append("\t\t\t\t<h3><a href=\"#%s\">%s</a>: %s</h3>" % (identifier, identifier, title))
307 | # assertion
308 | html_string.append("\t\t\t\t<p>%s</p>" % assertion)
309 | # close the overview div
310 | html_string.append("\t\t\t</div>")
311 | # start the details div
312 | html_string.append("\t\t\t<div class=\"testCaseDetails\">")
313 | # start the pages div
314 | html_string.append("\t\t\t\t<div class=\"testCasePages\">")
315 | # test page
316 | html_string.append("\t\t\t\t\t<p><a href=\"%s.xht\">Test</a></p>" % identifier)
317 | # reference page
318 | if test["hasReferenceRendering"]:
319 | html_string.append("\t\t\t\t\t<p><a href=\"%s-ref.xht\">Reference Rendering</a></p>" % identifier)
320 | # close the pages div
321 | html_string.append("\t\t\t\t</div>")
322 | # start the expectations div
323 | html_string.append("\t\t\t\t<div class=\"testCaseExpectations\">")
324 | # sfnt expectation
325 | string = "SFNT Expectation: %s" % sfntExpectation
326 | if sfntURL:
327 | links = []
328 | for url in sfntURL:
329 | if "#" in url:
330 | url = "<a href=\"%s\">%s</a>" % (url, url.split("#")[-1])
331 | links.append(url)
332 | else:
333 | url = "<a href=\"%s\">documentation</a>" % url
334 | links.append(url)
335 | string += " (%s)" % " ".join(links)
336 | html_string.append("\t\t\t\t\t<p>%s</p>" % string)
337 | # metadata expectation
338 | string = "Metadata Expectation: %s" % metadataExpectation
339 | if metadataURL:
340 | if "#" in metadataURL:
341 | s = "(%s)" % metadataURL.split("#")[-1]
342 | else:
343 | s = "(documentation)"
344 | string += " <a href=\"%s\">%s</a>" % (metadataURL, s)
345 | html_string.append("\t\t\t\t\t<p>%s</p>" % string)
346 | # close the expectations div
347 | html_string.append("\t\t\t\t</div>")
348 | # close the details div
349 | html_string.append("\t\t\t</div>")
350 | # close the test case div
351 | html_string.append("\t\t</div>")
352 |
353 | # close body
354 | html_string.append("\t</body>")
355 | # close html
356 | html_string.append("</html>")
357 | # finalize
358 | html_string = "\n".join(html_string)
359 | # write
360 | path = os.path.join(directory, "testcaseindex.xht")
361 | f = open(path, "w")
362 | f.write(html_string)
363 | f.close()
364 |
365 | def generateFormatIndexHTML(directory=None, testCases=[]):
366 | testCount = sum([len(group["testCases"]) for group in testCases])
367 | html_string = [
368 | "<?xml version=\"1.0\" encoding=\"UTF-8\"?>",
369 | doNotEditWarning,
370 | "<html xmlns=\"http://www.w3.org/1999/xhtml\">",
371 | "\t<head>",
372 | "\t\t<title>WOFF 2.0: Format Test Suite</title>",
373 | "\t\t<style type=\"text/css\">",
374 | "\t\t\t@import \"resources/index.css\";",
375 | "\t\t</style>",
376 | "\t</head>",
377 | "\t<body>",
378 | "\t\t<h1>WOFF 2.0: Format Test Suite (%d tests)</h1>" % testCount,
379 | ]
380 | # add a download note
381 | html_string.append("\t\t<p>")
382 | html_string.append("\t\t\tThe files used in these tests can be obtained individually or as a single zip file.")
383 | html_string.append("\t\t</p>")
384 | # add the test groups
385 | for group in testCases:
386 | title = group["title"]
387 | title = html.escape(title)
388 | # write the group header
389 | html_string.append("")
390 | html_string.append("\t\t<h2>%s</h2>" % title)
391 | # write the individual test cases
392 | for test in group["testCases"]:
393 | identifier = test["identifier"]
394 | title = test["title"]
395 | title = html.escape(title)
396 | description = test["description"]
397 | description = html.escape(description)
398 | valid = test["valid"]
399 | if valid:
400 | valid = "Yes"
401 | else:
402 | valid = "No"
403 | specLink = test["specLink"]
404 | # start the test case div
405 | html_string.append("\t\t<div class=\"testCase\" id=\"%s\">" % identifier)
406 | # start the overview div
407 | html_string.append("\t\t\t<div class=\"testCaseOverview\">")
408 | # title
409 | html_string.append("\t\t\t\t<h3><a href=\"#%s\">%s</a>: %s</h3>" % (identifier, identifier, title))
410 | # assertion
411 | html_string.append("\t\t\t\t<p>%s</p>" % description)
412 | # close the overview div
413 | html_string.append("\t\t\t</div>")
414 | # start the details div
415 | html_string.append("\t\t\t<div class=\"testCaseDetails\">")
416 | # validity
417 | string = "Valid: <a href=\"%s.woff2\">%s</a>" % (identifier, valid)
418 | html_string.append("\t\t\t\t\t<p>%s</p>" % string)
419 | # documentation
420 | if specLink is not None:
421 | links = specLink.split(' ')
422 |
423 | html_string.append("\t\t\t\t\t<p>")
424 | for link in links:
425 | name = 'Documentation'
426 | if '#' in link:
427 | name = link.split('#')[1]
428 | string = "\t\t\t\t\t\t<a href=\"%s\">%s</a>" % (link, name)
429 | html_string.append(string)
430 | html_string.append("\t\t\t\t\t</p>")
431 |
432 | # close the details div
433 | html_string.append("\t\t\t</div>")
434 | # close the test case div
435 | html_string.append("\t\t</div>")
436 | # close body
437 | html_string.append("\t</body>")
438 | # close html
439 | html_string.append("</html>")
440 | # finalize
441 | html_string = "\n".join(html_string)
442 | # write
443 | path = os.path.join(directory, "testcaseindex.xht")
444 | f = open(path, "w")
445 | f.write(html_string)
446 | f.close()
447 |
448 | def generateAuthoringToolIndexHTML(directory=None, testCases=[], note=None):
449 | testCount = sum([len(group["testCases"]) for group in testCases])
450 | html_string = [
451 | "<?xml version=\"1.0\" encoding=\"UTF-8\"?>",
452 | doNotEditWarning,
453 | "<html xmlns=\"http://www.w3.org/1999/xhtml\">",
454 | "\t<head>",
455 | "\t\t<title>WOFF 2.0: Authoring Tool Test Suite</title>",
456 | "\t\t<style type=\"text/css\">",
457 | "\t\t\t@import \"resources/index.css\";",
458 | "\t\t</style>",
459 | "\t</head>",
460 | "\t<body>",
461 | "\t\t<h1>WOFF 2.0: Authoring Tool Test Suite (%d tests)</h1>" % testCount,
462 | ]
463 | # add a download note
464 | html_string.append("\t\t<p>")
465 | html_string.append("\t\t\tThe files used in these tests can be obtained individually or as a single zip file.")
466 | html_string.append("\t\t</p>")
467 | # add the note
468 | if note:
469 | html_string.append("\t\t<p>")
470 | for line in note.splitlines():
471 | html_string.append("\t\t\t" + line)
472 | html_string.append("\t\t</p>")
473 | # add the test groups
474 | for group in testCases:
475 | title = group["title"]
476 | title = html.escape(title)
477 | # write the group header
478 | html_string.append("")
479 | html_string.append("\t\t<h2>%s</h2>" % title)
480 | # write the group note
481 | note = group["note"]
482 | if note:
483 | html_string.append("\t\t<p>")
484 | for line in note.splitlines():
485 | html_string.append("\t\t\t" + line)
486 | html_string.append("\t\t</p>")
487 | # write the individual test cases
488 | for test in group["testCases"]:
489 | identifier = test["identifier"]
490 | title = test["title"]
491 | title = html.escape(title)
492 | description = test["description"]
493 | description = html.escape(description)
494 | shouldConvert = test["shouldConvert"]
495 | if shouldConvert:
496 | shouldConvert = "Yes"
497 | else:
498 | shouldConvert = "No"
499 | specLink = test["specLink"]
500 | # start the test case div
501 | html_string.append("\t\t<div class=\"testCase\" id=\"%s\">" % identifier)
502 | # start the overview div
503 | html_string.append("\t\t\t<div class=\"testCaseOverview\">")
504 | # title
505 | html_string.append("\t\t\t\t<h3><a href=\"#%s\">%s</a>: %s</h3>" % (identifier, identifier, title))
506 | # assertion
507 | html_string.append("\t\t\t\t<p>%s</p>" % description)
508 | # close the overview div
509 | html_string.append("\t\t\t</div>")
510 | # start the details div
511 | html_string.append("\t\t\t<div class=\"testCaseDetails\">")
512 | # validity
513 | string = "Should Convert to WOFF: <a href=\"%s.ttf\">%s</a>" % (identifier, shouldConvert)
514 | html_string.append("\t\t\t\t\t<p>%s</p>" % string)
515 | # documentation
516 | if specLink is not None:
517 | links = specLink.split(' ')
518 |
519 | html_string.append("\t\t\t\t\t<p>")
520 | for link in links:
521 | name = 'Documentation'
522 | if '#' in link:
523 | name = link.split('#')[1]
524 | string = "\t\t\t\t\t\t<a href=\"%s\">%s</a>" % (link, name)
525 | html_string.append(string)
526 | html_string.append("\t\t\t\t\t</p>")
527 |
528 | # close the details div
529 | html_string.append("\t\t\t</div>")
530 | # close the test case div
531 | html_string.append("\t\t</div>")
532 | # close body
533 | html_string.append("\t</body>")
534 | # close html
535 | html_string.append("</html>")
536 | # finalize
537 | html_string = "\n".join(html_string)
538 | # write
539 | path = os.path.join(directory, "testcaseindex.xht")
540 | f = open(path, "w")
541 | f.write(html_string)
542 | f.close()
543 |
544 | def generateDecoderIndexHTML(directory=None, testCases=[], note=None):
545 | testCount = sum([len(group["testCases"]) for group in testCases])
546 | html_string = [
547 | "<?xml version=\"1.0\" encoding=\"UTF-8\"?>",
548 | doNotEditWarning,
549 | "<html xmlns=\"http://www.w3.org/1999/xhtml\">",
550 | "\t<head>",
551 | "\t\t<title>WOFF 2.0: Decoder Test Suite</title>",
552 | "\t\t<style type=\"text/css\">",
553 | "\t\t\t@import \"resources/index.css\";",
554 | "\t\t</style>",
555 | "\t</head>",
556 | "\t<body>",
557 | "\t\t<h1>WOFF 2.0: Decoder Test Suite (%d tests)</h1>" % testCount,
558 | ]
559 | # add a download note
560 | html_string.append("\t\t<p>")
561 | html_string.append("\t\t\tThe files used in these tests can be obtained individually or as a single zip file.")
562 | html_string.append("\t\t</p>")
563 | # add the note
564 | if note:
565 | html_string.append("\t\t<p>")
566 | for line in note.splitlines():
567 | html_string.append("\t\t\t" + line)
568 | html_string.append("\t\t</p>")
569 | # add the test groups
570 | for group in testCases:
571 | title = group["title"]
572 | title = html.escape(title)
573 | # write the group header
574 | html_string.append("")
575 | html_string.append("\t\t<h2>%s</h2>" % title)
576 | # write the group note
577 | note = group["note"]
578 | if note:
579 | html_string.append("\t\t<p>")
580 | for line in note.splitlines():
581 | html_string.append("\t\t\t" + line)
582 | html_string.append("\t\t</p>")
583 | # write the individual test cases
584 | for test in group["testCases"]:
585 | identifier = test["identifier"]
586 | title = test["title"]
587 | title = html.escape(title)
588 | description = test["description"]
589 | description = html.escape(description)
590 | roundTrip = test["roundTrip"]
591 | if roundTrip:
592 | roundTrip = "Yes"
593 | else:
594 | roundTrip = "No"
595 | specLink = test["specLink"]
596 | # start the test case div
597 | html_string.append("\t\t<div class=\"testCase\" id=\"%s\">" % identifier)
598 | # start the overview div
599 | html_string.append("\t\t\t<div class=\"testCaseOverview\">")
600 | # title
601 | html_string.append("\t\t\t\t<h3><a href=\"#%s\">%s</a>: %s</h3>" % (identifier, identifier, title))
602 | # assertion
603 | html_string.append("\t\t\t\t<p>%s</p>" % description)
604 | # close the overview div
605 | html_string.append("\t\t\t</div>")
606 | # start the details div
607 | html_string.append("\t\t\t<div class=\"testCaseDetails\">")
608 | # validity
609 | string = "Round-Trip Test: <a href=\"%s.woff2\">%s</a>" % (identifier, roundTrip)
610 | html_string.append("\t\t\t\t\t<p>%s</p>" % string)
611 | # documentation
612 | if specLink is not None:
613 | links = specLink.split(' ')
614 |
615 | html_string.append("\t\t\t\t\t<p>")
616 | for link in links:
617 | name = 'Documentation'
618 | if '#' in link:
619 | name = link.split('#')[1]
620 | string = "\t\t\t\t\t\t<a href=\"%s\">%s</a>" % (link, name)
621 | html_string.append(string)
622 | html_string.append("\t\t\t\t\t</p>")
623 |
624 | # close the details div
625 | html_string.append("\t\t\t</div>")
626 | # close the test case div
627 | html_string.append("\t\t</div>")
628 | # close body
629 | html_string.append("\t</body>")
630 | # close html
631 | html_string.append("</html>")
632 | # finalize
633 | html_string = "\n".join(html_string)
634 | # write
635 | path = os.path.join(directory, "testcaseindex.xht")
636 | f = open(path, "w")
637 | f.write(html_string)
638 | f.close()
639 |
640 |
641 |
642 | def expandSpecLinks(links):
643 | """
644 | This function expands anchor-only references to fully qualified spec links.
645 | "#name" expands to specificationURL + "#name", and "woff1:#name" expands to
646 | woff1SpecificationURL + "#name".
647 |
648 | links: 0..N space-separated #anchor references
649 | """
650 | if links is None or len(links) == 0:
651 | links = ""
652 |
653 | specLinks = []
654 | for link in links.split(" "):
655 | if link.startswith("woff1:"):
656 | link = woff1SpecificationURL + link[6:]
657 | else:
658 | link = specificationURL + link
659 |
660 | specLinks.append(link)
661 |
662 | return " ".join(specLinks)
663 |
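As a standalone illustration of the expansion rule implemented by `expandSpecLinks` above, here is a minimal sketch; the two base URLs are placeholder assumptions standing in for the module's `specificationURL` and `woff1SpecificationURL` globals:

```python
# Sketch of expandSpecLinks' behavior; the base URLs below are placeholders,
# not the module's real specificationURL / woff1SpecificationURL values.
SPEC_URL = "https://example.org/woff2-spec"
WOFF1_SPEC_URL = "https://example.org/woff1-spec"

def expand_spec_links(links):
    if links is None or len(links) == 0:
        links = ""
    out = []
    for link in links.split(" "):
        if link.startswith("woff1:"):
            # "woff1:#anchor" resolves against the WOFF 1.0 spec
            out.append(WOFF1_SPEC_URL + link[len("woff1:"):])
        else:
            # "#anchor" resolves against the WOFF 2.0 spec
            out.append(SPEC_URL + link)
    return " ".join(out)
```

For example, `"#conform-mustSetBit11 woff1:#Metadata"` expands to two space-separated fully qualified URLs, matching the space-separated form that the index generators later split on.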
--------------------------------------------------------------------------------
/generators/AuthoringToolTestCaseGenerator.py:
--------------------------------------------------------------------------------
1 | """
2 | This script generates the authoring tool test cases. It will create a directory
3 | one level up from the directory containing this script called "AuthoringTool".
4 | That directory will have the structure:
5 |
6 | /AuthoringTool
7 | README.txt - information about how the tests were generated and how they should be modified
8 | /Tests
9 | testcaseindex.xht - index of all test cases
10 | test-case-name-number.otf/ttf - individual SFNT test case
11 | /resources
12 | index.css - index CSS file
13 |
14 | Within this script, each test case is generated with a call to the
15 | writeTest function. In this, SFNT data must be passed along with
16 | details about the data. This function will generate the SFNT
17 | and register the case in the suite index.
18 | """
19 |
20 | import os
21 | import shutil
22 | import glob
23 | import struct
24 | import zipfile
25 | import brotli
26 | from fontTools.misc import sstruct
27 | from fontTools.pens.ttGlyphPen import TTGlyphPen
28 | from fontTools.ttLib import TTFont, getTableModule
29 | from fontTools.ttLib.sfnt import sfntDirectoryEntrySize
30 | from testCaseGeneratorLib.defaultData import defaultTestData, defaultSFNTTestData
31 | from testCaseGeneratorLib.sfnt import packSFNT, getSFNTData, getSFNTCollectionData, getTTFont
32 | from testCaseGeneratorLib.paths import resourcesDirectory, authoringToolDirectory, authoringToolTestDirectory,\
33 | authoringToolResourcesDirectory, sfntTTFSourcePath, sfntTTFCompositeSourcePath
34 | from testCaseGeneratorLib.html import generateAuthoringToolIndexHTML, expandSpecLinks
35 | from testCaseGeneratorLib.utilities import calcPaddingLength, calcTableChecksum
36 | from testCaseGeneratorLib.sharedCases import makeLSB1
37 | from testCaseGeneratorLib.sharedCases import makeGlyfOverlapBitmapSFNT, makeGlyfNoOverlapBitmapSFNT
38 |
39 | # ------------------
40 | # Directory Creation
41 | # (if needed)
42 | # ------------------
43 |
44 | if not os.path.exists(authoringToolDirectory):
45 | os.makedirs(authoringToolDirectory)
46 | if not os.path.exists(authoringToolTestDirectory):
47 | os.makedirs(authoringToolTestDirectory)
48 | if not os.path.exists(authoringToolResourcesDirectory):
49 | os.makedirs(authoringToolResourcesDirectory)
50 |
51 | # -------------------
52 | # Move HTML Resources
53 | # -------------------
54 |
55 | # index css
56 | destPath = os.path.join(authoringToolResourcesDirectory, "index.css")
57 | if os.path.exists(destPath):
58 | os.remove(destPath)
59 | shutil.copy(os.path.join(resourcesDirectory, "index.css"), destPath)
60 |
61 | # ---------------
62 | # Test Case Index
63 | # ---------------
64 |
65 | # As the tests are generated a log will be kept.
66 | # This log will be translated into an index after
67 | # all of the tests have been written.
68 |
69 | indexNote = """
70 | The tests in this suite represent SFNT data to be used for WOFF
71 | conversion without any alteration or correction. An authoring tool
72 | may allow the explicit or silent modification and/or correction of
73 | SFNT data. In such a case, the tests in this suite that are labeled
74 | as "should not convert" may be converted, so long as the problems
75 | in the files have been corrected. In that case, there is no longer
76 | any access to the "input font" as defined in the WOFF specification,
77 | so the bitwise identical tests should be skipped.
78 | """.strip()
79 |
80 | tableDataNote = """
81 | These files are valid SFNTs that exercise conversion of the table data.
82 | """.strip()
83 |
84 | tableDirectoryNote = """
85 | These files are valid SFNTs that exercise conversion of the table directory.
86 | """.strip()
87 |
88 | collectionNote = """
89 | These files are valid SFNTs that exercise conversion of font collections.
90 | """.strip()
91 |
92 | groupDefinitions = [
93 | # identifier, title, spec section, category note
94 | ("tabledirectory", "SFNT Table Directory Tests", expandSpecLinks("#DataTables"), tableDirectoryNote),
95 | ("tabledata", "SFNT Table Data Tests", expandSpecLinks("#DataTables"), tableDataNote),
96 | ("collection", "SFNT Collection Tests", expandSpecLinks("#DataTables"), collectionNote),
97 | ]
98 |
99 | testRegistry = {}
100 | for group in groupDefinitions:
101 | tag = group[0]
102 | testRegistry[tag] = []
103 |
104 | # -----------------
105 | # Test Case Writing
106 | # -----------------
107 |
108 | registeredIdentifiers = set()
109 | registeredTitles = set()
110 | registeredDescriptions = set()
111 |
112 | def writeTest(identifier, title, description, data, specLink=None, credits=[], shouldConvert=False, flavor="CFF"):
113 | """
114 | This function generates all of the files needed by a test case and
115 | registers the case with the suite. The arguments:
116 |
117 | identifier: The identifier for the test case. The identifier must be
118 | a - separated sequence of group name (from the groupDefinitions
119 | listed above), test case description (arbitrary length) and a number
120 | to make the name unique. The number should be zero padded to a length
121 | of three characters (ie "001" instead of "1").
122 |
123 | title: A thorough, but not too long, title for the test case.
124 |
125 | description: A detailed statement about what the test case is proving.
126 |
127 | data: The complete binary data for the SFNT.
128 |
129 | specLink: The anchor in the WOFF spec that the test case is testing.
130 |
131 | credits: A list of dictionaries defining the credits for the test case. The
132 | dictionaries must have this form:
133 |
134 | title="Name of the author or reviewer",
135 | role="author or reviewer",
136 | link="mailto:email or http://contactpage"
137 |
138 | shouldConvert: A boolean indicating if the SFNT is valid enough for
139 | conversion to WOFF.
140 |
141 | flavor: The flavor of the WOFF data. The options are CFF or TTF.
142 | """
143 | print("Compiling %s..." % identifier)
144 | assert identifier not in registeredIdentifiers, "Duplicate identifier! %s" % identifier
145 | assert title not in registeredTitles, "Duplicate title! %s" % title
146 | assert description not in registeredDescriptions, "Duplicate description! %s" % description
147 | registeredIdentifiers.add(identifier)
148 | registeredTitles.add(title)
149 | registeredDescriptions.add(description)
150 |
151 | specLink = expandSpecLinks(specLink)
152 |
153 | # generate the SFNT
154 | sfntPath = os.path.join(authoringToolTestDirectory, identifier)
155 | if flavor == "CFF":
156 | sfntPath += ".otf"
157 | elif flavor == "TTF":
158 | sfntPath += ".ttf"
159 | elif flavor == "TTC":
160 | sfntPath += ".ttc"
161 | elif flavor == "OTC":
162 | sfntPath += ".otc"
163 | else:
164 | assert False, "Unknown flavor: %s" % flavor
165 | f = open(sfntPath, "wb")
166 | f.write(data)
167 | f.close()
168 |
169 | # register the test
170 | tag = identifier.split("-")[0]
171 | testRegistry[tag].append(
172 | dict(
173 | identifier=identifier,
174 | title=title,
175 | description=description,
176 | shouldConvert=shouldConvert,
177 | specLink=specLink
178 | )
179 | )
180 |
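The identifier convention described in writeTest's docstring can be checked mechanically. The helper below is a hypothetical sketch (not part of the suite): it accepts a registered group tag from groupDefinitions, one or more hyphen-separated description parts, and a zero-padded three-digit number.

```python
import re

# Hypothetical helper: validate a test identifier against the convention in
# writeTest's docstring (group tag, description parts, three-digit number).
GROUP_TAGS = ("tabledirectory", "tabledata", "collection")
IDENTIFIER_RE = re.compile(r"^(?:%s)(?:-[a-z0-9]+)+-[0-9]{3}$" % "|".join(GROUP_TAGS))

def is_valid_identifier(identifier):
    return IDENTIFIER_RE.match(identifier) is not None
```

Under this check, "tabledata-dsig-001" passes while "tabledata-dsig-1" fails because the number is not zero padded to three digits.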
181 | # ---------------
182 | # Valid SFNT Data
183 | # ---------------
184 |
185 | def makeValidSFNT(flavor="CFF"):
186 | header, directory, tableData = defaultSFNTTestData(flavor=flavor)
187 | data = packSFNT(header, directory, tableData, flavor=flavor)
188 | return data
189 |
190 | # -----------
191 | # Compression
192 | # -----------
193 |
194 |
195 | # ----
196 | # DSIG
197 | # ----
198 |
199 | def makeDSIG(flavor="CFF"):
200 | header, directory, tableData = defaultSFNTTestData(flavor=flavor)
201 | # adjust the header
202 | header["numTables"] += 1
203 | # store the data
204 | data = b"\0" * 4
205 | tableData["DSIG"] = data
206 | # offset the directory entries
207 | for entry in directory:
208 | entry["offset"] += sfntDirectoryEntrySize
209 | # find the offset
210 | entries = [(entry["offset"], entry) for entry in directory]
211 | entry = max(entries)[1]
212 | offset = entry["offset"] + entry["length"]
213 | offset += calcPaddingLength(offset)
214 | # make the entry
215 | directory.append(
216 | dict(
217 | tag="DSIG",
218 | offset=offset,
219 | length=4,
220 | checksum=calcTableChecksum("DSIG", data)
221 | )
222 | )
223 | # compile
224 | data = packSFNT(header, directory, tableData, flavor=flavor)
225 | return data
226 |
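The offset bookkeeping in makeDSIG relies on 4-byte table alignment: the new entry's offset is the end of the last table plus its padding. A minimal sketch of the padding rule, assuming testCaseGeneratorLib.utilities.calcPaddingLength behaves like the standard SFNT alignment rule:

```python
def calc_padding_length(length):
    # Zero bytes needed to pad an SFNT table of the given length up to the
    # next 4-byte boundary (assumed behavior of calcPaddingLength).
    return (4 - length % 4) % 4
```

So a table of length 4 (like the dummy DSIG above) needs no padding, while a 5-byte table would be followed by three zero bytes before the next table begins.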
227 | writeTest(
228 | identifier="tabledata-dsig-001",
229 | title="CFF SFNT With DSIG Table",
230 | description="The CFF flavored SFNT has a DSIG table. (Note: this is not a valid DSIG. It should not be used for testing anything other than the presence of the table after the conversion process.) The process of converting to WOFF should remove the DSIG table.",
231 | shouldConvert=True,
232 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
233 | specLink="#conform-mustRemoveDSIG",
234 | data=makeDSIG()
235 | )
236 |
237 | writeTest(
238 | identifier="tabledata-dsig-002",
239 | title="TTF SFNT With DSIG Table",
240 | description="The TTF flavored SFNT has a DSIG table. (Note: this is not a valid DSIG. It should not be used for testing anything other than the presence of the table after the conversion process.) The process of converting to WOFF should remove the DSIG table.",
241 | shouldConvert=True,
242 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
243 | specLink="#conform-mustRemoveDSIG",
244 | data=makeDSIG(flavor="TTF"),
245 | flavor="TTF"
246 | )
247 |
248 | # --------------------
249 | # Bit 11 of head flags
250 | # --------------------
251 |
252 | writeTest(
253 | identifier="tabledata-bit11-001",
254 | title="Valid CFF SFNT For Bit 11",
255 | description="Bit 11 of the head table flags must be set for CFF flavored SFNT.",
256 | shouldConvert=True,
257 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
258 | specLink="#conform-mustSetBit11",
259 | data=makeValidSFNT()
260 | )
261 |
262 | writeTest(
263 | identifier="tabledata-bit11-002",
264 | title="Valid TTF SFNT For Bit 11",
265 | description="Bit 11 of the head table flags must be set for TTF flavored SFNT.",
266 | shouldConvert=True,
267 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
268 | specLink="#conform-mustSetBit11",
269 | data=makeValidSFNT(flavor="TTF"),
270 | flavor="TTF"
271 | )
272 |
273 | # ---------------
274 | # Transformations
275 | # ---------------
276 |
277 | def makeGlyfBBox1(calcBBoxes=True, composite=False):
278 | font = getTTFont(sfntTTFSourcePath, recalcBBoxes=calcBBoxes)
279 | glyf = font["glyf"]
280 | hmtx = font["hmtx"]
281 | for name in ("bbox1", "bbox2"):
282 | pen = TTGlyphPen(None)
283 | if name == "bbox1":
284 | pen.moveTo((0, 0))
285 | pen.lineTo((0, 1000))
286 | pen.lineTo((1000, 1000))
287 | pen.lineTo((1000, 0))
288 | pen.closePath()
289 | else:
290 | pen.moveTo((0, 0))
291 | pen.qCurveTo((500, 750), (600, 500), (500, 250), (0, 0))
292 | pen.closePath()
293 | glyph = pen.glyph()
294 | if not calcBBoxes:
295 | glyph.recalcBounds(glyf)
296 | glyph.xMax -= 100
297 | glyf.glyphs[name] = glyph
298 | hmtx.metrics[name] = (0, 0)
299 | glyf.glyphOrder.append(name)
300 |
301 | if composite:
302 | name = "bbox3"
303 | pen = TTGlyphPen(glyf.glyphOrder)
304 | pen.addComponent("bbox1", [1, 0, 0, 1, 0, 0])
305 | pen.addComponent("bbox2", [1, 0, 0, 1, 1000, 0])
306 | glyph = pen.glyph()
307 | glyph.recalcBounds(glyf)
308 | glyf.glyphs[name] = glyph
309 | hmtx.metrics[name] = (0, 0)
310 | glyf.glyphOrder.append(name)
311 |
312 | tableData = getSFNTData(font)[0]
313 | font.close()
314 | del font
315 | header, directory, tableData = defaultSFNTTestData(tableData=tableData, flavor="TTF")
316 | data = packSFNT(header, directory, tableData, flavor="TTF")
317 | return data
318 |
319 | writeTest(
320 | identifier="tabledata-transform-glyf-001",
321 | title="Valid TTF SFNT For Glyph BBox Calculation 1",
322 | description="TTF flavored SFNT font containing glyphs where the calculated bounding box matches the encoded one, the transformed glyf table in the output WOFF font must have bboxBitmap with all values as 0 and empty bboxStream.",
323 | shouldConvert=True,
324 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
325 | specLink="#conform-mustCalculateOmitBBoxValues",
326 | data=makeGlyfBBox1(),
327 | flavor="TTF"
328 | )
329 |
330 | writeTest(
331 | identifier="tabledata-transform-glyf-002",
332 | title="Valid TTF SFNT For Glyph BBox Calculation 2",
333 | description="TTF flavored SFNT font containing glyphs with the calculated bounding box differing from the encoded one, the transformed glyf table in the output WOFF font must have bboxBitmap and bboxStream set with the encoded bounding boxes.",
334 | shouldConvert=True,
335 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
336 | specLink="#conform-mustCalculateSetBBoxValues",
337 | data=makeGlyfBBox1(False),
338 | flavor="TTF"
339 | )
340 |
341 | writeTest(
342 | identifier="tabledata-transform-glyf-003",
343 | title="Valid TTF SFNT For Glyph BBox Calculation 3",
344 | description="TTF flavored SFNT font containing glyphs with the calculated bounding box differing from the encoded one and a composite glyph, the transformed glyf table in the output WOFF font must have bboxBitmap and bboxStream set with the encoded bounding boxes.",
345 | shouldConvert=True,
346 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
347 | specLink="#conform-mustSetCompositeBBoxValues",
348 | data=makeGlyfBBox1(False, True),
349 | flavor="TTF"
350 | )
351 |
352 | def makeGlyfBBox2(bbox):
353 | font = getTTFont(sfntTTFSourcePath, recalcBBoxes=False)
354 | glyf = font["glyf"]
355 | hmtx = font["hmtx"]
356 |
357 | name = "bbox1"
358 | glyph = getTableModule('glyf').Glyph()
359 | glyph.numberOfContours = 0
360 | glyph.xMin = glyph.xMax = glyph.yMin = glyph.yMax = bbox
361 | glyph.data = sstruct.pack(getTableModule('glyf').glyphHeaderFormat, glyph)
362 | glyf.glyphs[name] = glyph
363 | hmtx.metrics[name] = (0, 0)
364 | glyf.glyphOrder.append(name)
365 |
366 | tableData = getSFNTData(font)[0]
367 | font.close()
368 | del font
369 |
370 | header, directory, tableData = defaultSFNTTestData(tableData=tableData, flavor="TTF")
371 | data = packSFNT(header, directory, tableData, flavor="TTF")
372 | return data
373 |
374 | writeTest(
375 | identifier="tabledata-transform-glyf-004",
376 | title="Invalid TTF SFNT With Empty Glyph BBox 1",
377 | description="TTF flavored SFNT font containing a glyph with zero contours and non-zero bounding box values.",
378 | shouldConvert=False,
379 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
380 | specLink="#conform-mustRejectNonEmptyBBox",
381 | data=makeGlyfBBox2(10),
382 | flavor="TTF"
383 | )
384 |
385 | writeTest(
386 | identifier="tabledata-transform-glyf-005",
387 | title="Invalid TTF SFNT With Empty Glyph BBox 2",
388 | description="TTF flavored SFNT font containing a glyph with zero contours and zero bounding box values, the transformed glyf table in the output WOFF font must have bboxBitmap with all values as 0 and empty bboxStream.",
389 | shouldConvert=True,
390 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
391 | specLink="#conform-mustClearEmptyBBox",
392 | data=makeGlyfBBox2(0),
393 | flavor="TTF"
394 | )
395 |
396 | writeTest(
397 | identifier="tabledata-transform-hmtx-001",
398 | title="Valid TTF SFNT For Glyph LSB Elimination 1",
399 | description="TTF flavored SFNT font containing two proportional and two monospaced glyphs with left side bearings matching the xMin values of each corresponding glyph bounding box. The hmtx table must be transformed with version 1 transform, eliminating both lsb[] and leftSideBearing[] arrays with corresponding Flags bits set.",
400 | shouldConvert=True,
401 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
402 | specLink="#conform-mustEliminateLSBs #conform-mustCheckLeftSideBearings",
403 | data=makeLSB1(),
404 | flavor="TTF"
405 | )
406 |
407 | writeTest(
408 | identifier="tabledata-transform-glyf-006",
409 | title="Valid TTF SFNT with Overlap Simple Bit",
410 | description=("TTF flavored SFNT font containing glyphs with the overlap simple bit set, the "
411 | "transformed glyf table in the output WOFF font must have overlapBitmap with 1 for "
412 | "all glyphs that have outlines."),
413 | shouldConvert=True,
414 | credits=[dict(title="Garret Rieger", role="author")],
415 | specLink="#conform-hasOverlap",
416 | data=makeGlyfOverlapBitmapSFNT(),
417 | flavor="TTF"
418 | )
419 |
420 | writeTest(
421 | identifier="tabledata-transform-glyf-007",
422 | title="Valid TTF SFNT with no Overlap Simple Bit",
423 | description=("TTF flavored SFNT font containing no glyphs with the overlap simple bit set, the "
424 | "transformed glyf table in the output WOFF font must not include an overlapBitmap."),
425 | shouldConvert=True,
426 | credits=[dict(title="Garret Rieger", role="author")],
427 | specLink="#conform-noOverlap",
428 | data=makeGlyfNoOverlapBitmapSFNT(),
429 | flavor="TTF"
430 | )
431 |
432 |
433 | # -----------
434 | # Collections
435 | # -----------
436 |
437 | def makeCollectionSharing1():
438 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFSourcePath], modifyNames=False)
439 |
440 | return data
441 |
442 | writeTest(
443 | identifier="collection-sharing-001",
444 | title="Valid Font Collection With No Duplicate Tables",
445 | description="TTF flavored SFNT collection with all tables being shared, output WOFF font must not contain any duplicate tables.",
446 | shouldConvert=True,
447 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
448 | specLink="#conform-mustNotDuplicateTables",
449 | data=makeCollectionSharing1(),
450 | flavor="TTC"
451 | )
452 |
453 | def makeCollectionSharing2():
454 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFSourcePath])
455 |
456 | return data
457 |
458 | writeTest(
459 | identifier="collection-sharing-002",
460 | title="Valid Font Collection With Shared Glyf/Loca",
461 | description="TTF flavored SFNT collection containing two fonts sharing the same glyf and loca tables.",
462 | shouldConvert=True,
463 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
464 | specLink="#conform-mustVerifyGlyfLocaShared",
465 | data=makeCollectionSharing2(),
466 | flavor="TTC"
467 | )
468 |
469 | def makeCollectionSharing3():
470 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFSourcePath, sfntTTFCompositeSourcePath])
471 |
472 | return data
473 |
474 | writeTest(
475 | identifier="collection-sharing-003",
476 | title="Valid Font Collection With Shared And Unshared Glyf/Loca",
477 | description="TTF flavored SFNT collection containing three fonts, two of them sharing the same glyf and loca tables and the third using different glyf and loca tables.",
478 | shouldConvert=True,
479 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
480 | specLink="#conform-mustNotDuplicateTables",
481 | data=makeCollectionSharing3(),
482 | flavor="TTC"
483 | )
484 |
485 | def makeCollectionSharing4():
486 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFSourcePath], duplicates=["loca"])
487 |
488 | return data
489 |
490 | writeTest(
491 | identifier="collection-sharing-004",
492 | title="Invalid Font Collection With Unshared Loca",
493 | description="An invalid TTF flavored SFNT collection containing two fonts sharing glyf but not loca table.",
494 | shouldConvert=False,
495 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
496 | specLink="#conform-mustRejectSingleGlyfLocaShared",
497 | data=makeCollectionSharing4(),
498 | flavor="TTC"
499 | )
500 |
501 | def makeCollectionSharing5():
502 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFSourcePath], duplicates=["glyf"])
503 |
504 | return data
505 |
506 | writeTest(
507 | identifier="collection-sharing-005",
508 | title="Invalid Font Collection With Unshared Glyf",
509 | description="An invalid TTF flavored SFNT collection containing two fonts sharing loca but not glyf table.",
510 | shouldConvert=False,
511 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
512 | specLink="#conform-mustRejectSingleGlyfLocaShared",
513 | data=makeCollectionSharing5(),
514 | flavor="TTC"
515 | )
516 |
517 | def makeCollectionSharing6():
518 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFSourcePath], shared=["cmap"])
519 |
520 | return data
521 |
522 | writeTest(
523 | identifier="collection-sharing-006",
524 | title="Font Collection With Single Shared Table",
525 | description="A valid TTF flavored SFNT collection containing two fonts sharing only the cmap table.",
526 | shouldConvert=True,
527 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
528 | specLink="#conform-mustEmitSingleEntryDirectory",
529 | data=makeCollectionSharing6(),
530 | flavor="TTC"
531 | )
532 |
533 | def makeCollectionTransform1():
534 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFCompositeSourcePath])
535 |
536 | return data
537 |
538 | writeTest(
539 | identifier="collection-transform-glyf-001",
540 | title="Valid Font Collection With Multiple Glyf/Loca",
541 | description="TTF flavored SFNT collection with multiple unshared glyf and loca tables; all of them must be transformed in the output WOFF font.",
542 | shouldConvert=True,
543 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
544 | specLink="#conform-mustTransformMultipleGlyfLoca",
545 | data=makeCollectionTransform1(),
546 | flavor="TTC"
547 | )
548 |
549 | def makeCollectionHmtxTransform1():
550 | fonts = [getTTFont(sfntTTFSourcePath), getTTFont(sfntTTFSourcePath)]
551 | for i, font in enumerate(fonts):
552 | glyf = font["glyf"]
553 | hmtx = font["hmtx"]
554 | maxp = font["maxp"]
555 | for name in glyf.glyphs:
556 | glyph = glyf.glyphs[name]
557 | glyph.expand(glyf)
558 | if hasattr(glyph, "xMin"):
559 | # Move the glyph so that xMin is 0
560 | pen = TTGlyphPen(None)
561 | glyph.draw(pen, glyf, -glyph.xMin)
562 | glyph = pen.glyph()
563 | glyph.recalcBounds(glyf)
564 | assert glyph.xMin == 0
565 | glyf.glyphs[name] = glyph
566 | hmtx.metrics[name] = (glyph.xMax, 0)
567 |
568 | # Build a unique glyph for each font, but with the same advance and LSB
569 | name = "box"
570 | pen = TTGlyphPen(None)
571 | pen.moveTo([0, 0])
572 | pen.lineTo([0, 1000])
573 | if i > 0:
574 | pen.lineTo([0, 2000])
575 | pen.lineTo([1000, 2000])
576 | pen.lineTo([1000, 1000])
577 | pen.lineTo([1000, 0])
578 | pen.closePath()
579 | glyph = pen.glyph()
580 | glyph.recalcBounds(glyf)
581 | glyf.glyphs[name] = glyph
582 | hmtx.metrics[name] = (glyph.xMax, glyph.xMin)
583 | glyf.glyphOrder.append(name)
584 | maxp.recalc(font)
585 | data = hmtx.compile(font)
586 | hmtx.decompile(data, font)
587 |
588 | data = getSFNTCollectionData(fonts, shared=["hmtx"])
589 |
590 | return data
591 |
592 | writeTest(
593 | identifier="collection-transform-hmtx-001",
594 | title="Valid Font Collection With Unshared Glyf And Shared Hmtx 1",
595 | description="TTF flavored SFNT collection with multiple unshared glyf tables and one shared hmtx table where the xMin and LSB of every glyph are 0.",
596 | shouldConvert=True,
597 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
598 | specLink="#conform-mustCheckLSBAllGlyfTables",
599 | data=makeCollectionHmtxTransform1(),
600 | flavor="TTC"
601 | )
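The two hmtx test cases hinge on the transform's eligibility condition: the hmtx transform drops the explicit left-side-bearing array, so it is only lossless when every contour glyph's LSB equals its xMin, and for a collection that must hold across every glyf table that references the shared hmtx. A sketch of that check, mirroring the expand/hasattr pattern used in the generators above (the helper name is illustrative, not part of the generator library):

```python
def can_transform_hmtx(fonts):
    # True only if lsb == xMin for every contour glyph in every font
    # that shares the hmtx table; otherwise the transform is lossy.
    for font in fonts:
        glyf = font["glyf"]
        hmtx = font["hmtx"]
        for name in glyf.glyphs:
            glyph = glyf.glyphs[name]
            glyph.expand(glyf)
            if not hasattr(glyph, "xMin"):
                continue  # empty glyph: no outline, hence no xMin
            advance, lsb = hmtx.metrics[name]
            if lsb != glyph.xMin:
                return False
    return True
```

Under this condition, collection-transform-hmtx-001 (xMin == LSB == 0 everywhere) is eligible, while collection-transform-hmtx-002 (some xMin != 0 in the second glyf table) is not, matching its #conform-mustNotApplyLSBTransformForOTC spec link.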
602 |
603 | def makeCollectionHmtxTransform2():
604 | fonts = [getTTFont(sfntTTFSourcePath), getTTFont(sfntTTFSourcePath)]
605 | for i, font in enumerate(fonts):
606 | glyf = font["glyf"]
607 | hmtx = font["hmtx"]
608 | maxp = font["maxp"]
609 | for name in glyf.glyphs:
610 | glyph = glyf.glyphs[name]
611 | glyph.expand(glyf)
612 | if hasattr(glyph, "xMin"):
613 | metrics = hmtx.metrics[name]
614 | if i == 0:
615 | # Move the glyph so that xMin is 0
616 | pen = TTGlyphPen(None)
617 | glyph.draw(pen, glyf, -glyph.xMin)
618 | glyph = pen.glyph()
619 | glyph.recalcBounds(glyf)
620 | assert glyph.xMin == 0
621 | glyf.glyphs[name] = glyph
622 | hmtx.metrics[name] = (metrics[0], 0)
623 |
624 | # Build a unique glyph for each font, but with the same advance and LSB
625 | name = "box"
626 | pen = TTGlyphPen(None)
627 | pen.moveTo([0, 0])
628 | pen.lineTo([0, 1000])
629 | if i > 0:
630 | pen.lineTo([0, 2000])
631 | pen.lineTo([1000, 2000])
632 | pen.lineTo([1000, 1000])
633 | pen.lineTo([1000, 0])
634 | pen.closePath()
635 | glyph = pen.glyph()
636 | glyph.recalcBounds(glyf)
637 | glyf.glyphs[name] = glyph
638 | hmtx.metrics[name] = (glyph.xMax, glyph.xMin)
639 | glyf.glyphOrder.append(name)
640 | maxp.recalc(font)
641 | data = hmtx.compile(font)
642 | hmtx.decompile(data, font)
643 |
644 | data = getSFNTCollectionData(fonts, shared=["hmtx"])
645 |
646 | return data
647 |
648 | writeTest(
649 | identifier="collection-transform-hmtx-002",
650 | title="Valid Font Collection With Unshared Glyf And Shared Hmtx 2",
651 | description="TTF flavored SFNT collection with multiple unshared glyf tables and one shared hmtx table where the LSB of every glyph is 0, but the xMin of some glyphs in the second glyf table is not 0.",
652 | shouldConvert=True,
653 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
654 | specLink="#conform-mustNotApplyLSBTransformForOTC",
655 | data=makeCollectionHmtxTransform2(),
656 | flavor="TTC"
657 | )
658 |
659 | writeTest(
660 | identifier="collection-pairing-001",
661 | title="Valid Font Collection With Glyf/Loca Pairs",
662 | description="TTF flavored SFNT collection with multiple unshared glyf and loca tables; the glyf and loca tables from each font must be paired in the output WOFF font.",
663 | shouldConvert=True,
664 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
665 | specLink="#conform-mustReorderGlyfLoca",
666 | data=makeCollectionTransform1(),
667 | flavor="TTC"
668 | )
669 |
670 | def makeCollectionOrder1():
671 | data = getSFNTCollectionData([sfntTTFSourcePath, sfntTTFSourcePath, sfntTTFSourcePath], reverseNames=True)
672 |
673 | return data
674 |
675 | writeTest(
676 | identifier="tabledirectory-order-001",
677 | title="Valid Font Collection With Unsorted Fonts",
678 | description="TTF flavored SFNT collection with fonts not in alphabetical order. The WOFF creation process must preserve the font order.",
679 | shouldConvert=True,
680 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
681 | specLink="#conform-mustPreserveFontOrder",
682 | data=makeCollectionOrder1(),
683 | flavor="TTC"
684 | )
685 |
686 | writeTest(
687 | identifier="tabledirectory-collection-index-001",
688 | title="Valid Font Collection",
689 | description="TTF flavored SFNT collection. WOFF creation process must record the index of the matching TableDirectoryEntry into the CollectionFontEntry for each font.",
690 | shouldConvert=True,
691 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
692 | specLink="#conform-mustRecordCollectionEntryIndex",
693 | data=makeCollectionSharing2(),
694 | flavor="TTC"
695 | )
696 |
697 | def makeKnownTables():
698 | from testCaseGeneratorLib.woff import knownTableTags
699 |
700 | header, directory, tableData = defaultSFNTTestData(flavor="TTF")
701 |
702 | tags = [e["tag"] for e in directory]
703 | assert set(tags) < set(knownTableTags)
704 |
705 | data = packSFNT(header, directory, tableData, flavor="TTF")
706 | return data
707 |
708 | writeTest(
709 | identifier="tabledirectory-knowntags-001",
710 | title="Valid TTF SFNT With All Tables Known",
711 | description="TTF flavored SFNT font with all tables known. Output WOFF2 table directory must use known table flag for all tables in the font.",
712 | shouldConvert=True,
713 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
714 | specLink="#conform-mustUseKnownTags",
715 | data=makeKnownTables(),
716 | flavor="TTF"
717 | )
718 |
719 | dummyTables = ("ZZZA", "ZZZB", "ZZZC")
720 |
721 | def makeUnknownTables():
722 | from testCaseGeneratorLib.woff import knownTableTags
723 |
724 | header, directory, tableData = defaultSFNTTestData(flavor="TTF")
725 | # adjust the header
726 | header["numTables"] += len(dummyTables)
727 | # adjust the offsets
728 | shift = len(dummyTables) * sfntDirectoryEntrySize
729 | for entry in directory:
730 | entry["offset"] += shift
731 | # store the data
732 | sorter = [(entry["offset"], entry["length"]) for entry in directory]
733 | offset, length = max(sorter)
734 | offset = offset + length
735 | data = b"\0" * 4
736 | checksum = calcTableChecksum(None, data)
737 | for tag in dummyTables:
738 | tableData[tag] = data
739 | entry = dict(
740 | tag=tag,
741 | offset=offset,
742 | length=4,
743 | checksum=checksum
744 | )
745 | directory.append(entry)
746 | offset += 4
747 |
748 | tags = [e["tag"] for e in directory]
749 | assert not set(tags) < set(knownTableTags)
750 |
751 | data = packSFNT(header, directory, tableData, flavor="TTF")
752 | return data
753 |
754 | writeTest(
755 | identifier="tabledirectory-knowntags-002",
756 | title="Valid TTF SFNT With Some Tables Unknown",
757 | description="TTF flavored SFNT font with some tables unknown. Output WOFF2 table directory must use known table flag for known tables.",
758 | shouldConvert=True,
759 | credits=[dict(title="Khaled Hosny", role="author", link="http://khaledhosny.org")],
760 | specLink="#conform-mustUseKnownTags",
761 | data=makeUnknownTables(),
762 | flavor="TTF"
763 | )
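The knowntags tests target how WOFF2 encodes table identity: the low six bits of each directory entry's flags byte either index the spec's 63-entry known-tags array (values 0 to 62) or hold 63 to signal that an explicit 4-byte tag follows. A sketch of that flag computation; the `KNOWN_TAGS` list here is an abbreviated stand-in, not the real ordering from the spec (which `testCaseGeneratorLib.woff.knownTableTags` mirrors):

```python
# Abbreviated stand-in for the 63 known tags defined by the WOFF2 spec;
# the real array has a fixed order that both encoder and decoder share.
KNOWN_TAGS = ["cmap", "head", "hhea", "hmtx", "maxp", "name", "glyf", "loca"]
ARBITRARY_TAG_FLAG = 63  # low six bits all set: explicit tag follows

def table_flag(tag):
    # Known tags are stored as an index into the shared array; unknown
    # tags (like the ZZZA/ZZZB/ZZZC dummies above) get flag 63 plus the
    # raw 4-byte tag in the directory entry.
    if tag in KNOWN_TAGS:
        return KNOWN_TAGS.index(tag)
    return ARBITRARY_TAG_FLAG
```

This is why tabledirectory-knowntags-001 asserts `set(tags) < set(knownTableTags)` (every flag is an index) while knowntags-002 asserts the opposite after appending the dummy tables.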
764 |
765 | # ------------------
766 | # Generate the Index
767 | # ------------------
768 |
769 | print("Compiling index...")
770 |
771 | testGroups = []
772 |
773 | for tag, title, url, note in groupDefinitions:
774 | group = dict(title=title, url=url, testCases=testRegistry[tag], note=note)
775 | testGroups.append(group)
776 |
777 | generateAuthoringToolIndexHTML(directory=authoringToolTestDirectory, testCases=testGroups, note=indexNote)
778 |
779 | # ----------------
780 | # Generate the zip
781 | # ----------------
782 |
783 | print("Compiling zip file...")
784 |
785 | zipPath = os.path.join(authoringToolTestDirectory, "AuthoringToolTestFonts.zip")
786 | if os.path.exists(zipPath):
787 | os.remove(zipPath)
788 |
789 | allBinariesZip = zipfile.ZipFile(zipPath, "w")
790 |
791 | pattern = os.path.join(authoringToolTestDirectory, "*.?t?")
792 | for path in glob.glob(pattern):
793 | ext = os.path.splitext(path)[1]
794 | assert ext in (".otf", ".ttf", ".otc", ".ttc")
795 | allBinariesZip.write(path, os.path.basename(path))
796 |
797 | allBinariesZip.close()
798 |
799 | # ---------------------
800 | # Generate the Manifest
801 | # ---------------------
802 |
803 | print("Compiling manifest...")
804 |
805 | manifest = []
806 |
807 | for tag, title, url, note in groupDefinitions:
808 | for testCase in testRegistry[tag]:
809 | identifier = testCase["identifier"]
810 | title = testCase["title"]
811 | assertion = testCase["description"]
812 | links = "#" + testCase["specLink"].split("#")[-1]
813 | # XXX force the chapter onto the links
814 | links = "#TableDirectory," + links
815 | flags = ""
816 | credits = ""
817 | # format the line
818 | line = "%s\t%s\t%s\t%s\t%s\t%s" % (
819 | identifier, # id
820 | "", # reference
821 | title, # title
822 | flags, # flags
823 | links, # links
824 | assertion # assertion
825 | )
826 | # store
827 | manifest.append(line)
828 |
829 | path = os.path.join(authoringToolDirectory, "manifest.txt")
830 | if os.path.exists(path):
831 | os.remove(path)
832 | f = open(path, "w")
833 | f.write("\n".join(manifest))
834 | f.close()
835 |
836 | # -----------------------
837 | # Check for Unknown Files
838 | # -----------------------
839 |
840 | otfPattern = os.path.join(authoringToolTestDirectory, "*.otf")
841 | ttfPattern = os.path.join(authoringToolTestDirectory, "*.ttf")
842 | otcPattern = os.path.join(authoringToolTestDirectory, "*.otc")
843 | ttcPattern = os.path.join(authoringToolTestDirectory, "*.ttc")
844 | filesOnDisk = glob.glob(otfPattern) + glob.glob(ttfPattern) + glob.glob(otcPattern) + glob.glob(ttcPattern)
845 |
846 | for path in filesOnDisk:
847 | identifier = os.path.basename(path)
848 | identifier = identifier.split(".")[0]
849 | if identifier not in registeredIdentifiers:
850 | print("Unknown file:", path)
851 |
--------------------------------------------------------------------------------
/generators/resources/SFNT-TTF.ttx:
--------------------------------------------------------------------------------
[TTX XML markup stripped during extraction; only the name-table string contents survive:]

947 | WOFF Test TTF
950 | Regular
953 | WebFontsWorkingGroup:WOFF Test TTF Regular:2010
956 | WOFF Test TTF Regular
959 | Version 001.000 2010
962 | WOFFTestTTF-Regular
965 | WebFonts Working Group
968 | WOFF Test TTF
971 | Regular
974 | WebFontsWorkingGroup:WOFF Test TTF Regular:2010
977 | WOFF Test TTF-Regular
980 | Version 001.000 2010
983 | WOFFTestTTF-Regular
986 | WebFonts Working Group
--------------------------------------------------------------------------------