Backed out 10 changesets (bug 1712151, bug 1724279, bug 1730712, bug 1717051, bug 1723031, bug 1731145) for causing failures on test_yaml.py

Backed out changeset 7f64d538701b (bug 1723031)
Backed out changeset 394152994966 (bug 1723031)
Backed out changeset 9bfeb01bcc9a (bug 1723031)
Backed out changeset 3d283616a57d (bug 1730712)
Backed out changeset bc677b409650 (bug 1724279)
Backed out changeset 784c94c2f528 (bug 1723031)
Backed out changeset 6e1bde40e3b4 (bug 1723031)
Backed out changeset 7adf7e2136a3 (bug 1712151)
Backed out changeset 2aef162b9a1b (bug 1717051)
Backed out changeset 9beeb6d3d95b (bug 1731145)
criss 2021-09-28 00:32:38 +03:00
parent f968bf6006
commit f2dcba95fa
126 changed files with 310 additions and 17312 deletions

.gitignore

@@ -14,16 +14,11 @@ ID
!id/
.DS_Store*
*.pdb
*.egg-info
.eslintcache
# Filesystem temporaries
.fuse_hidden*
# Ignore Python .egg-info directories for first-party modules (but,
# still add vendored packages' .egg-info directories)
*.egg-info
!third_party/python/**/*.egg-info
!testing/web-platform/tests/tools/third_party/**/*.egg-info
# Vim swap files.
.*.sw[a-z]
.sw[a-z]


@@ -7,16 +7,13 @@
(^|/)ID$
(^|/)\.DS_Store$
\.pdb
\.egg-info
\.eslintcache
\.gcda
\.gcno
\.gcov
compile_commands\.json
# Ignore Python .egg-info directories for first-party modules (but,
# still add vendored packages' .egg-info directories)
^(?=.*\.egg-info/)(?!^third_party/python/)(?!^testing/web-platform/tests/tools/third_party/)
# Vim swap files.
^\.sw.$
.[^/]*\.sw.$
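
The lookahead rule removed above (replaced here by a plain \.egg-info pattern) can be exercised with Python's re module. A minimal sketch, assuming hypothetical repository paths that are not taken from this patch:

import re

# The pattern this backout removes: ignore any path that contains ".egg-info/"
# unless it lives under one of the vendored third-party directories.
pattern = re.compile(
    r"^(?=.*\.egg-info/)"
    r"(?!^third_party/python/)"
    r"(?!^testing/web-platform/tests/tools/third_party/)"
)

# Hypothetical paths, used only to show the intent of the rule.
for path in (
    "python/mozbuild/mozbuild.egg-info/PKG-INFO",
    "third_party/python/esprima/esprima.egg-info/PKG-INFO",
):
    print(path, "ignored" if pattern.search(path) else "kept")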


@@ -1,2 +1,2 @@
packages.txt:build/common_virtualenv_packages.txt
vendored:third_party/python/glean_parser
pth:third_party/python/glean_parser


@@ -44,82 +44,82 @@ pth:testing/mozbase/mozversion
pth:testing/raptor
pth:testing/talos
pth:testing/web-platform
vendored:testing/web-platform/tests/tools/third_party/h2
vendored:testing/web-platform/tests/tools/third_party/hpack
vendored:testing/web-platform/tests/tools/third_party/html5lib
vendored:testing/web-platform/tests/tools/third_party/hyperframe
vendored:testing/web-platform/tests/tools/third_party/pywebsocket3
vendored:testing/web-platform/tests/tools/third_party/webencodings
vendored:testing/web-platform/tests/tools/wptserve
vendored:testing/web-platform/tests/tools/wptrunner
pth:testing/web-platform/tests/tools/third_party/certifi
pth:testing/web-platform/tests/tools/third_party/h2
pth:testing/web-platform/tests/tools/third_party/hpack
pth:testing/web-platform/tests/tools/third_party/html5lib
pth:testing/web-platform/tests/tools/third_party/hyperframe
pth:testing/web-platform/tests/tools/third_party/pywebsocket3
pth:testing/web-platform/tests/tools/third_party/webencodings
pth:testing/web-platform/tests/tools/wptserve
pth:testing/web-platform/tests/tools/wptrunner
pth:testing/xpcshell
vendored:third_party/python/aiohttp
vendored:third_party/python/appdirs
vendored:third_party/python/async_timeout
vendored:third_party/python/atomicwrites
vendored:third_party/python/attrs
vendored:third_party/python/blessings
vendored:third_party/python/cbor2
vendored:third_party/python/certifi
vendored:third_party/python/chardet
vendored:third_party/python/Click
vendored:third_party/python/compare_locales
vendored:third_party/python/cookies
vendored:third_party/python/cram
vendored:third_party/python/diskcache
vendored:third_party/python/distro
vendored:third_party/python/dlmanager
vendored:third_party/python/ecdsa
vendored:third_party/python/esprima
vendored:third_party/python/fluent.migrate
vendored:third_party/python/fluent.syntax
vendored:third_party/python/funcsigs
vendored:third_party/python/gyp/pylib
vendored:third_party/python/idna
vendored:third_party/python/idna-ssl
vendored:third_party/python/importlib_metadata
vendored:third_party/python/iso8601
vendored:third_party/python/Jinja2
vendored:third_party/python/jsmin
vendored:third_party/python/json-e
vendored:third_party/python/jsonschema
vendored:third_party/python/MarkupSafe/src
vendored:third_party/python/mohawk
vendored:third_party/python/more_itertools
vendored:third_party/python/mozilla_version
vendored:third_party/python/multidict
vendored:third_party/python/packaging
vendored:third_party/python/pathspec
vendored:third_party/python/pip_tools
vendored:third_party/python/pluggy
vendored:third_party/python/ply
vendored:third_party/python/py
vendored:third_party/python/pyasn1
vendored:third_party/python/pyasn1_modules
vendored:third_party/python/pylru
vendored:third_party/python/pyparsing
vendored:third_party/python/pyrsistent
vendored:third_party/python/pystache
vendored:third_party/python/pytest
vendored:third_party/python/python-hglib
vendored:third_party/python/pytoml
vendored:third_party/python/PyYAML/lib3/
vendored:third_party/python/redo
vendored:third_party/python/requests
vendored:third_party/python/requests_unixsocket
vendored:third_party/python/responses
vendored:third_party/python/rsa
vendored:third_party/python/sentry_sdk
vendored:third_party/python/six
vendored:third_party/python/slugid
vendored:third_party/python/taskcluster
vendored:third_party/python/taskcluster_urls
vendored:third_party/python/typing_extensions
vendored:third_party/python/urllib3
vendored:third_party/python/voluptuous
vendored:third_party/python/yamllint
vendored:third_party/python/yarl
vendored:third_party/python/zipp
pth:third_party/python/aiohttp
pth:third_party/python/appdirs
pth:third_party/python/async_timeout
pth:third_party/python/atomicwrites
pth:third_party/python/attrs
pth:third_party/python/blessings
pth:third_party/python/cbor2
pth:third_party/python/chardet
pth:third_party/python/Click
pth:third_party/python/compare_locales
pth:third_party/python/cookies
pth:third_party/python/cram
pth:third_party/python/diskcache
pth:third_party/python/distro
pth:third_party/python/dlmanager
pth:third_party/python/ecdsa
pth:third_party/python/esprima
pth:third_party/python/fluent.migrate
pth:third_party/python/fluent.syntax
pth:third_party/python/funcsigs
pth:third_party/python/gyp/pylib
pth:third_party/python/idna
pth:third_party/python/idna-ssl
pth:third_party/python/importlib_metadata
pth:third_party/python/iso8601
pth:third_party/python/Jinja2
pth:third_party/python/jsmin
pth:third_party/python/json-e
pth:third_party/python/jsonschema
pth:third_party/python/MarkupSafe/src
pth:third_party/python/mohawk
pth:third_party/python/more_itertools
pth:third_party/python/mozilla_version
pth:third_party/python/multidict
pth:third_party/python/packaging
pth:third_party/python/pathspec
pth:third_party/python/pip_tools
pth:third_party/python/pluggy
pth:third_party/python/ply
pth:third_party/python/py
pth:third_party/python/pyasn1
pth:third_party/python/pyasn1_modules
pth:third_party/python/pylru
pth:third_party/python/pyparsing
pth:third_party/python/pyrsistent
pth:third_party/python/pystache
pth:third_party/python/pytest
pth:third_party/python/python-hglib
pth:third_party/python/pytoml
pth:third_party/python/PyYAML/lib3/
pth:third_party/python/redo
pth:third_party/python/requests
pth:third_party/python/requests_unixsocket
pth:third_party/python/responses
pth:third_party/python/rsa
pth:third_party/python/sentry_sdk
pth:third_party/python/six
pth:third_party/python/slugid
pth:third_party/python/taskcluster
pth:third_party/python/taskcluster_urls
pth:third_party/python/typing_extensions
pth:third_party/python/urllib3
pth:third_party/python/voluptuous
pth:third_party/python/yamllint
pth:third_party/python/yarl
pth:third_party/python/zipp
pth:toolkit/components/telemetry/tests/marionette/harness
pth:tools
pth:tools/moztreedocs


@@ -9,7 +9,6 @@ import os
import platform
import shutil
import site
import subprocess
import sys
if sys.version_info[0] < 3:
@@ -142,6 +141,35 @@ CATEGORIES = {
},
}
def search_path(mozilla_dir, packages_txt):
with open(os.path.join(mozilla_dir, packages_txt)) as f:
packages = [
line.strip().split(":", maxsplit=1)
for line in f
if not line.lstrip().startswith("#")
]
def handle_package(action, package):
if action == "packages.txt":
for p in search_path(mozilla_dir, package):
yield os.path.join(mozilla_dir, p)
if action == "pth":
yield os.path.join(mozilla_dir, package)
for current_action, current_package in packages:
for path in handle_package(current_action, current_package):
yield path
def mach_sys_path(mozilla_dir):
return [
os.path.join(mozilla_dir, path)
for path in search_path(mozilla_dir, "build/mach_virtualenv_packages.txt")
]
INSTALL_PYTHON_GUIDANCE_LINUX = """
See https://firefox-source-docs.mozilla.org/setup/linux_build.html#installingpython
for guidance on how to install Python on your system.
@@ -168,102 +196,6 @@ install a recent enough Python 3.
""".strip()
def _scrub_system_site_packages():
site_paths = set(site.getsitepackages() + [site.getusersitepackages()])
sys.path = [path for path in sys.path if path not in site_paths]
def _activate_python_environment(topsrcdir):
# We need the "mach" module to access the logic to parse virtualenv
# requirements. Since that depends on "packaging" (and, transitively,
# "pyparsing"), we add those to the path too.
sys.path[0:0] = [
os.path.join(topsrcdir, module)
for module in (
os.path.join("python", "mach"),
os.path.join("third_party", "python", "packaging"),
os.path.join("third_party", "python", "pyparsing"),
)
]
from mach.requirements import MachEnvRequirements
thunderbird_dir = os.path.join(topsrcdir, "comm")
is_thunderbird = os.path.exists(thunderbird_dir) and bool(
os.listdir(thunderbird_dir)
)
requirements = MachEnvRequirements.from_requirements_definition(
topsrcdir,
is_thunderbird,
True,
os.path.join(topsrcdir, "build", "mach_virtualenv_packages.txt"),
)
if os.environ.get("MACH_USE_SYSTEM_PYTHON") or os.environ.get("MOZ_AUTOMATION"):
env_var = (
"MOZ_AUTOMATION"
if os.environ.get("MOZ_AUTOMATION")
else "MACH_USE_SYSTEM_PYTHON"
)
has_pip = (
subprocess.run(
[sys.executable, "-c", "import pip"], stderr=subprocess.DEVNULL
).returncode
== 0
)
# There are environments in CI that aren't prepared to provide any Mach dependency
# packages. Changing this is a nontrivial endeavour, so guard against having
# non-optional Mach requirements.
assert (
not requirements.pypi_requirements
), "Mach pip package requirements must be optional."
if has_pip:
pip = [sys.executable, "-m", "pip"]
check_result = subprocess.run(
pip + ["check"],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
)
if check_result.returncode:
print(check_result.stdout, file=sys.stderr)
subprocess.check_call(pip + ["list", "-v"], stdout=sys.stderr)
raise Exception(
'According to "pip check", the current Python '
"environment has package-compatibility issues."
)
package_result = requirements.validate_environment_packages(pip)
if not package_result.has_all_packages:
print(
"Skipping automatic management of Python dependencies since "
f"the '{env_var}' environment variable is set.\n"
"The following issues were found while validating your Python "
"environment:"
)
print(package_result.report())
sys.exit(1)
else:
# Pip isn't installed to the system Python environment, so we can't use
# it to verify compatibility with Mach. Remove the system site-packages
# from the import scope so that Mach behaves as though all of its
# (optional) dependencies are not installed.
_scrub_system_site_packages()
elif sys.prefix == sys.base_prefix:
# We're in an environment where we normally use the Mach virtualenv,
# but we're running a "nativecmd" such as "create-mach-environment".
# Remove global site packages from sys.path to improve isolation accordingly.
_scrub_system_site_packages()
sys.path[0:0] = [
os.path.join(topsrcdir, pth.path)
for pth in requirements.pth_requirements + requirements.vendored_requirements
]
def initialize(topsrcdir):
# Ensure we are running Python 3.6+. We run this check as soon as
# possible to avoid a cryptic import/usage error.
@@ -287,9 +219,15 @@ def initialize(topsrcdir):
if os.path.exists(deleted_dir):
shutil.rmtree(deleted_dir, ignore_errors=True)
state_dir = _create_state_dir()
_activate_python_environment(topsrcdir)
if sys.prefix == sys.base_prefix:
# We are not in a virtualenv. Remove global site packages
# from sys.path.
site_paths = set(site.getsitepackages() + [site.getusersitepackages()])
sys.path = [path for path in sys.path if path not in site_paths]
state_dir = _create_state_dir()
sys.path[0:0] = mach_sys_path(topsrcdir)
import mach.base
import mach.main
from mach.util import setenv
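
The search_path()/mach_sys_path() helpers restored above compute Mach's sys.path directly from "pth:" manifest entries instead of going through MachEnvRequirements. A minimal, self-contained sketch of that idea, using a made-up manifest in a temporary directory rather than the real tree:

import os
import tempfile

def resolve_pth_entries(topsrcdir, manifest):
    # Yield one sys.path entry per "pth:" line, recursing into child manifests
    # referenced by "packages.txt:" lines; comments and blank lines are skipped.
    with open(os.path.join(topsrcdir, manifest)) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            action, param = line.split(":", maxsplit=1)
            if action == "pth":
                yield os.path.join(topsrcdir, param)
            elif action == "packages.txt":
                yield from resolve_pth_entries(topsrcdir, param)

if __name__ == "__main__":
    srcdir = tempfile.mkdtemp()
    with open(os.path.join(srcdir, "mach_virtualenv_packages.txt"), "w") as f:
        f.write("# example manifest\npth:python/mach\npth:third_party/python/six\n")
    print(list(resolve_pth_entries(srcdir, "mach_virtualenv_packages.txt")))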


@@ -3,7 +3,5 @@ packages.txt:build/common_virtualenv_packages.txt
# and it has to be built from source.
pypi-optional:glean-sdk==40.0.0:telemetry will not be collected
# Mach gracefully handles the case where `psutil` is unavailable.
# We aren't (yet) able to pin packages in automation, so we have to
# support down to the oldest locally-installed version (5.4.2).
pypi-optional:psutil>=5.4.2,<=5.8.0:telemetry will be missing some data
pypi-optional:zstandard>=0.11.1,<=0.15.2:zstd archives will not be possible to extract
pypi-optional:psutil==5.8.0:telemetry will be missing some data
pypi:zstandard==0.15.2


@@ -1,3 +1,2 @@
packages.txt:build/common_virtualenv_packages.txt
vendored:third_party/python/glean_parser
pth:third_party/python/glean_parser


@@ -21,11 +21,8 @@ except ImportError:
base_dir = os.path.abspath(os.path.dirname(__file__))
sys.path.insert(0, os.path.join(base_dir, "python", "mach"))
sys.path.insert(0, os.path.join(base_dir, "python", "mozboot"))
sys.path.insert(0, os.path.join(base_dir, "python", "mozbuild"))
sys.path.insert(0, os.path.join(base_dir, "third_party", "python", "packaging"))
sys.path.insert(0, os.path.join(base_dir, "third_party", "python", "pyparsing"))
sys.path.insert(0, os.path.join(base_dir, "third_party", "python", "six"))
from mozbuild.configure import (
ConfigureSandbox,


@@ -1,242 +0,0 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import json
import os
from pathlib import Path
import subprocess
from packaging.requirements import Requirement
THUNDERBIRD_PYPI_ERROR = """
Thunderbird requirements definitions cannot include PyPI packages.
""".strip()
class EnvironmentPackageValidationResult:
def __init__(self):
self._package_discrepancies = []
self.has_all_packages = True
def add_discrepancy(self, requirement, found):
self._package_discrepancies.append((requirement, found))
self.has_all_packages = False
def report(self):
lines = []
for requirement, found in self._package_discrepancies:
if found:
error = f'Installed with unexpected version "{found}"'
else:
error = "Not installed"
lines.append(f"{requirement}: {error}")
return "\n".join(lines)
class PthSpecifier:
def __init__(self, path):
self.path = path
class PypiSpecifier:
def __init__(self, requirement):
self.requirement = requirement
class PypiOptionalSpecifier(PypiSpecifier):
def __init__(self, repercussion, requirement):
super().__init__(requirement)
self.repercussion = repercussion
class MachEnvRequirements:
"""Requirements associated with a "virtualenv_packages.txt" definition
Represents the dependencies of a virtualenv. The source files consist
of colon-delimited fields. The first field
specifies the action. The remaining fields are arguments to that
action. The following actions are supported:
pth -- Adds the path given as argument to "mach.pth" under
the virtualenv site packages directory.
pypi -- Fetch the package, plus dependencies, from PyPI.
pypi-optional -- Attempt to install the package and dependencies from PyPI.
Continue using the virtualenv, even if the package could not be installed.
packages.txt -- Denotes that the specified path is a child manifest. It
will be read and processed as if its contents were concatenated
into the manifest being read.
thunderbird-packages.txt -- Denotes a Thunderbird child manifest.
Thunderbird child manifests are only activated when working on Thunderbird,
and they cannot have "pypi" or "pypi-optional" entries.
"""
def __init__(self):
self.requirements_paths = []
self.pth_requirements = []
self.pypi_requirements = []
self.pypi_optional_requirements = []
self.vendored_requirements = []
def validate_environment_packages(self, pip_command):
result = EnvironmentPackageValidationResult()
if not self.pypi_requirements and not self.pypi_optional_requirements:
return result
pip_json = subprocess.check_output(
pip_command + ["list", "--format", "json"], universal_newlines=True
)
installed_packages = json.loads(pip_json)
installed_packages = {
package["name"]: package["version"] for package in installed_packages
}
for pkg in self.pypi_requirements:
installed_version = installed_packages.get(pkg.requirement.name)
if not installed_version or not pkg.requirement.specifier.contains(
installed_version
):
result.add_discrepancy(pkg.requirement, installed_version)
for pkg in self.pypi_optional_requirements:
installed_version = installed_packages.get(pkg.requirement.name)
if installed_version and not pkg.requirement.specifier.contains(
installed_version
):
result.add_discrepancy(pkg.requirement, installed_version)
return result
@classmethod
def from_requirements_definition(
cls,
topsrcdir,
is_thunderbird,
is_mach_or_build_virtualenv,
requirements_definition,
):
requirements = cls()
_parse_mach_env_requirements(
requirements,
requirements_definition,
topsrcdir,
is_thunderbird,
is_mach_or_build_virtualenv,
)
return requirements
def _parse_mach_env_requirements(
requirements_output,
root_requirements_path,
topsrcdir,
is_thunderbird,
is_mach_or_build_virtualenv,
):
topsrcdir = Path(topsrcdir)
def _parse_requirements_line(
current_requirements_path, line, line_number, is_thunderbird_packages_txt
):
line = line.strip()
if not line or line.startswith("#"):
return
action, params = line.rstrip().split(":", maxsplit=1)
if action == "pth":
path = topsrcdir / params
if not path.exists():
# In sparse checkouts, not all paths will be populated.
return
for child in path.iterdir():
if child.name.endswith(".dist-info"):
raise Exception(
f'The "pth:" pointing to "{path}" has a ".dist-info" file.\n'
f'Perhaps "{current_requirements_path}:{line_number}" '
'should change to start with "vendored:" instead of "pth:".'
)
if child.name == "PKG-INFO":
raise Exception(
f'The "pth:" pointing to "{path}" has a "PKG-INFO" file.\n'
f'Perhaps "{current_requirements_path}:{line_number}" '
'should change to start with "vendored:" instead of "pth:".'
)
requirements_output.pth_requirements.append(PthSpecifier(params))
elif action == "vendored":
requirements_output.vendored_requirements.append(PthSpecifier(params))
elif action == "packages.txt":
_parse_requirements_definition_file(
os.path.join(topsrcdir, params),
is_thunderbird_packages_txt,
)
elif action == "pypi":
if is_thunderbird_packages_txt:
raise Exception(THUNDERBIRD_PYPI_ERROR)
requirements_output.pypi_requirements.append(
PypiSpecifier(
_parse_package_specifier(params, is_mach_or_build_virtualenv)
)
)
elif action == "pypi-optional":
if is_thunderbird_packages_txt:
raise Exception(THUNDERBIRD_PYPI_ERROR)
if len(params.split(":", maxsplit=1)) != 2:
raise Exception(
"Expected pypi-optional package to have a repercussion "
'description in the format "package:fallback explanation", '
'found "{}"'.format(params)
)
raw_requirement, repercussion = params.split(":")
requirements_output.pypi_optional_requirements.append(
PypiOptionalSpecifier(
repercussion,
_parse_package_specifier(
raw_requirement, is_mach_or_build_virtualenv
),
)
)
elif action == "thunderbird-packages.txt":
if is_thunderbird:
_parse_requirements_definition_file(
os.path.join(topsrcdir, params), is_thunderbird_packages_txt=True
)
else:
raise Exception("Unknown requirements definition action: %s" % action)
def _parse_requirements_definition_file(
requirements_path, is_thunderbird_packages_txt
):
"""Parse requirements file into list of requirements"""
assert os.path.isfile(requirements_path)
requirements_output.requirements_paths.append(requirements_path)
with open(requirements_path, "r") as requirements_file:
lines = [line for line in requirements_file]
for number, line in enumerate(lines, start=1):
_parse_requirements_line(
requirements_path, line, number, is_thunderbird_packages_txt
)
_parse_requirements_definition_file(root_requirements_path, False)
def _parse_package_specifier(raw_requirement, is_mach_or_build_virtualenv):
requirement = Requirement(raw_requirement)
if not is_mach_or_build_virtualenv and [
s for s in requirement.specifier if s.operator != "=="
]:
raise Exception(
'All virtualenvs except for "mach" and "build" must pin pypi package '
f'versions in the format "package==version", found "{raw_requirement}"'
)
return requirement
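
For reference, a manifest in the format described by the docstring above would look roughly like the following; the entries are illustrative (the Thunderbird path in particular is hypothetical), not copied from this patch:

# Comments and blank lines are ignored.
pth:python/mach
vendored:third_party/python/Jinja2
pypi:zstandard==0.15.2
pypi-optional:psutil==5.8.0:telemetry will be missing some data
packages.txt:build/common_virtualenv_packages.txt
thunderbird-packages.txt:comm/build/virtualenv_packages.txt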


@@ -12,9 +12,3 @@ skip-if = python == 3
skip-if = python == 3
[test_logger.py]
[test_mach.py]
[test_virtualenv_compatibility.py]
# The Windows and Mac workers only use the internal PyPI mirror,
# which will be missing packages required for this test.
skip-if =
os == "win"
os == "mac"


@@ -1,136 +0,0 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import shutil
import subprocess
import sys
from pathlib import Path
import mozunit
from buildconfig import topsrcdir
from mach.requirements import MachEnvRequirements
def _resolve_command_virtualenv_names():
virtualenv_names = []
for child in (Path(topsrcdir) / "build").iterdir():
if not child.name.endswith("_virtualenv_packages.txt"):
continue
if child.name == "mach_virtualenv_packages.txt":
continue
virtualenv_names.append(child.name[: -len("_virtualenv_packages.txt")])
return virtualenv_names
def _requirement_definition_to_pip_format(virtualenv_name, cache, is_mach_or_build_env):
"""Convert from parsed requirements object to pip-consumable format"""
path = Path(topsrcdir) / "build" / f"{virtualenv_name}_virtualenv_packages.txt"
requirements = MachEnvRequirements.from_requirements_definition(
topsrcdir, False, is_mach_or_build_env, path
)
lines = []
for pypi in (
requirements.pypi_requirements + requirements.pypi_optional_requirements
):
lines.append(str(pypi.requirement))
for vendored in requirements.vendored_requirements:
lines.append(cache.package_for_vendor_dir(Path(vendored.path)))
return "\n".join(lines)
class PackageCache:
def __init__(self, storage_dir: Path):
self._cache = {}
self._storage_dir = storage_dir
def package_for_vendor_dir(self, vendor_path: Path):
if vendor_path in self._cache:
return self._cache[vendor_path]
if not any((p for p in vendor_path.iterdir() if p.name.endswith(".dist-info"))):
# This vendored package is not a wheel. It may be a source package (with
# a setup.py), or just some Python code that was manually copied into the
# tree. If it's a source package, the setup.py file may be up a few levels
# from the referenced Python module path.
package_dir = vendor_path
while True:
if (package_dir / "setup.py").exists():
break
elif package_dir.parent == package_dir:
raise Exception(
f'Package "{vendor_path}" is not a wheel and does not have a '
'setup.py file. Perhaps it should be "pth:" instead of '
'"vendored:"?'
)
package_dir = package_dir.parent
self._cache[vendor_path] = str(package_dir)
return str(package_dir)
# Pip requires that wheels have a version number in their name, even if
# it ignores it. We should parse out the version and put it in here
# so that failure debugging is easier, but that's non-trivial work.
# So, this "0" satisfies pip's naming requirement while being relatively
# obvious that it's a placeholder.
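# For example (hypothetical): archiving "third_party/python/Click" here would
# produce a placeholder wheel named "Click-0-py3-none-any.whl".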
output_path = str(self._storage_dir / f"{vendor_path.name}-0-py3-none-any")
shutil.make_archive(output_path, "zip", vendor_path)
whl_path = output_path + ".whl"
os.rename(output_path + ".zip", whl_path)
self._cache[vendor_path] = whl_path
return whl_path
def test_virtualenvs_compatible(tmpdir):
command_virtualenv_names = _resolve_command_virtualenv_names()
work_dir = Path(tmpdir)
cache = PackageCache(work_dir)
mach_requirements = _requirement_definition_to_pip_format("mach", cache, True)
# Create virtualenv to try to install all dependencies into.
subprocess.check_call(
[
sys.executable,
os.path.join(
topsrcdir,
"third_party",
"python",
"virtualenv",
"virtualenv.py",
),
"--no-download",
str(work_dir / "env"),
]
)
for name in command_virtualenv_names:
print(f'Checking compatibility of "{name}" virtualenv')
command_requirements = _requirement_definition_to_pip_format(
name, cache, name == "build"
)
with open(work_dir / "requirements.txt", "w") as requirements_txt:
requirements_txt.write(mach_requirements)
requirements_txt.write("\n")
requirements_txt.write(command_requirements)
# Attempt to install combined set of dependencies (global Mach + current
# command)
subprocess.check_call(
[
str(work_dir / "env" / "bin" / "pip"),
"install",
"-r",
str(work_dir / "requirements.txt"),
],
cwd=topsrcdir,
)
if __name__ == "__main__":
mozunit.main()


@@ -22,7 +22,6 @@ from manifestparser import filters as mpf
from mach.decorators import CommandArgument, Command
from mach.requirements import MachEnvRequirements
from mach.util import UserError
here = os.path.abspath(os.path.dirname(__file__))
@@ -69,20 +68,11 @@ def python(
raise UserError("Cannot pass both --requirements and --no-virtualenv.")
if no_virtualenv:
python_path = sys.executable
requirements = MachEnvRequirements.from_requirements_definition(
command_context.topsrcdir,
False,
True,
os.path.join(
command_context.topsrcdir, "build", "mach_virtualenv_packages.txt"
),
)
from mach_initialize import mach_sys_path
python_path = sys.executable
append_env["PYTHONPATH"] = os.pathsep.join(
os.path.join(command_context.topsrcdir, pth.path)
for pth in requirements.pth_requirements
+ requirements.vendored_requirements
mach_sys_path(command_context.topsrcdir)
)
else:
command_context.virtualenv_manager.ensure()


@@ -0,0 +1,144 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
THUNDERBIRD_PYPI_ERROR = """
Thunderbird requirements definitions cannot include PyPI packages.
""".strip()
class PthSpecifier:
def __init__(self, path):
self.path = path
class PypiSpecifier:
def __init__(self, package_name, version, full_specifier):
self.package_name = package_name
self.version = version
self.full_specifier = full_specifier
class PypiOptionalSpecifier:
def __init__(self, repercussion, package_name, version, full_specifier):
self.repercussion = repercussion
self.package_name = package_name
self.version = version
self.full_specifier = full_specifier
class MachEnvRequirements:
"""Requirements associated with a "virtualenv_packages.txt" definition
Represents the dependencies of a virtualenv. The source files consist
of colon-delimited fields. The first field
specifies the action. The remaining fields are arguments to that
action. The following actions are supported:
pth -- Adds the path given as argument to "mach.pth" under
the virtualenv site packages directory.
pypi -- Fetch the package, plus dependencies, from PyPI.
pypi-optional -- Attempt to install the package and dependencies from PyPI.
Continue using the virtualenv, even if the package could not be installed.
packages.txt -- Denotes that the specified path is a child manifest. It
will be read and processed as if its contents were concatenated
into the manifest being read.
thunderbird-packages.txt -- Denotes a Thunderbird child manifest.
Thunderbird child manifests are only activated when working on Thunderbird,
and they cannot have "pypi" or "pypi-optional" entries.
"""
def __init__(self):
self.requirements_paths = []
self.pth_requirements = []
self.pypi_requirements = []
self.pypi_optional_requirements = []
@classmethod
def from_requirements_definition(
cls, topsrcdir, is_thunderbird, requirements_definition
):
requirements = cls()
_parse_mach_env_requirements(
requirements, requirements_definition, topsrcdir, is_thunderbird
)
return requirements
def _parse_mach_env_requirements(
requirements_output, root_requirements_path, topsrcdir, is_thunderbird
):
def _parse_requirements_line(line, is_thunderbird_packages_txt):
line = line.strip()
if not line or line.startswith("#"):
return
action, params = line.rstrip().split(":", maxsplit=1)
if action == "pth":
requirements_output.pth_requirements.append(PthSpecifier(params))
elif action == "packages.txt":
_parse_requirements_definition_file(
os.path.join(topsrcdir, params),
is_thunderbird_packages_txt,
)
elif action == "pypi":
if is_thunderbird_packages_txt:
raise Exception(THUNDERBIRD_PYPI_ERROR)
package_name, version = _parse_package_specifier(params)
requirements_output.pypi_requirements.append(
PypiSpecifier(package_name, version, params)
)
elif action == "pypi-optional":
if is_thunderbird_packages_txt:
raise Exception(THUNDERBIRD_PYPI_ERROR)
if len(params.split(":", maxsplit=1)) != 2:
raise Exception(
"Expected pypi-optional package to have a repercussion "
'description in the format "package:fallback explanation", '
'found "{}"'.format(params)
)
package, repercussion = params.split(":")
package_name, version = _parse_package_specifier(package)
requirements_output.pypi_optional_requirements.append(
PypiOptionalSpecifier(repercussion, package_name, version, package)
)
elif action == "thunderbird-packages.txt":
if is_thunderbird:
_parse_requirements_definition_file(
os.path.join(topsrcdir, params), is_thunderbird_packages_txt=True
)
else:
raise Exception("Unknown requirements definition action: %s" % action)
def _parse_requirements_definition_file(
requirements_path, is_thunderbird_packages_txt
):
"""Parse requirements file into list of requirements"""
assert os.path.isfile(requirements_path)
requirements_output.requirements_paths.append(requirements_path)
with open(requirements_path, "r") as requirements_file:
lines = [line for line in requirements_file]
for line in lines:
_parse_requirements_line(line, is_thunderbird_packages_txt)
_parse_requirements_definition_file(root_requirements_path, False)
def _parse_package_specifier(specifier):
if len(specifier.split("==")) != 2:
raise Exception(
"Expected pypi package version to be pinned in the "
'format "package==version", found "{}"'.format(specifier)
)
return specifier.split("==")


@@ -29,21 +29,14 @@ def test_up_to_date_vendor():
# it will use its associated virtualenv and package configuration.
# Since it uses "pip-tools" within, and "pip-tools" needs
# the "Click" library, we need to make them available.
file.write("vendored:third_party/python/Click\n")
file.write("vendored:third_party/python/pip_tools\n")
file.write("pth:third_party/python/Click\n")
file.write("pth:third_party/python/pip_tools\n")
# Copy existing "third_party/python/" vendored files
existing_vendored = os.path.join(topsrcdir, "third_party", "python")
work_vendored = os.path.join(work_dir, "third_party", "python")
shutil.copytree(existing_vendored, work_vendored)
# Copy "mach" module so that `VirtualenvManager` can populate itself.
# This is needed because "topsrcdir" is used in this test both for determining
# import paths and for acting as a "work dir".
existing_mach = os.path.join(topsrcdir, "python", "mach")
work_mach = os.path.join(work_dir, "python", "mach")
shutil.copytree(existing_mach, work_mach)
# Run the vendoring process
vendor = VendorPython(
work_dir, None, Mock(), topobjdir=os.path.join(work_dir, "obj")
@@ -60,6 +53,7 @@ def test_up_to_date_vendor():
existing_vendored,
work_vendored,
"--exclude=__pycache__",
"--exclude=*.egg-info",
]
)


@@ -193,9 +193,7 @@ class VirtualenvManager(VirtualenvHelper):
if existing_metadata != self._metadata:
return False
if (
env_requirements.pth_requirements or env_requirements.vendored_requirements
) and self.populate_local_paths:
if env_requirements.pth_requirements and self.populate_local_paths:
try:
with open(
os.path.join(self._site_packages_dir(), PTH_FILENAME)
@@ -216,15 +214,34 @@
os.path.abspath(os.path.join(self.topsrcdir, pth.path))
)
for pth in env_requirements.pth_requirements
+ env_requirements.vendored_requirements
]
if current_paths != required_paths:
return False
pip = os.path.join(self.bin_path, "pip")
package_result = env_requirements.validate_environment_packages([pip])
if not package_result.has_all_packages:
if (
env_requirements.pypi_requirements
or env_requirements.pypi_optional_requirements
):
pip_json = self._run_pip(
["list", "--format", "json"], stdout=subprocess.PIPE
).stdout
installed_packages = json.loads(pip_json)
installed_packages = {
package["name"]: package["version"] for package in installed_packages
}
for requirement in env_requirements.pypi_requirements:
if (
installed_packages.get(requirement.package_name, None)
!= requirement.version
):
return False
for requirement in env_requirements.pypi_optional_requirements:
installed_version = installed_packages.get(
requirement.package_name, None
)
if installed_version and installed_version != requirement.version:
return False
return True
@@ -292,7 +309,15 @@
return self.virtualenv_root
def _requirements(self):
from mach.requirements import MachEnvRequirements
try:
# When `virtualenv.py` is invoked from an existing Mach process,
# import MachEnvRequirements in the expected way.
from mozbuild.requirements import MachEnvRequirements
except ImportError:
# When `virtualenv.py` is invoked standalone, import
# MachEnvRequirements from the adjacent "standalone"
# requirements module.
from requirements import MachEnvRequirements
if not os.path.exists(self._manifest_path):
raise Exception(
@@ -306,10 +331,7 @@
os.listdir(thunderbird_dir)
)
return MachEnvRequirements.from_requirements_definition(
self.topsrcdir,
is_thunderbird,
self._virtualenv_name in ("mach", "build"),
self._manifest_path,
self.topsrcdir, is_thunderbird, self._manifest_path
)
def populate(self):
@@ -341,10 +363,7 @@
if self.populate_local_paths:
python_lib = distutils.sysconfig.get_python_lib()
with open(os.path.join(python_lib, PTH_FILENAME), "a") as f:
for pth_requirement in (
env_requirements.pth_requirements
+ env_requirements.vendored_requirements
):
for pth_requirement in env_requirements.pth_requirements:
path = os.path.join(self.topsrcdir, pth_requirement.path)
# This path is relative to the .pth file. Using a
# relative path allows the srcdir/objdir combination
@@ -353,14 +372,14 @@
f.write("{}\n".format(os.path.relpath(path, python_lib)))
for pypi_requirement in env_requirements.pypi_requirements:
self.install_pip_package(str(pypi_requirement.requirement))
self.install_pip_package(pypi_requirement.full_specifier)
for requirement in env_requirements.pypi_optional_requirements:
try:
self.install_pip_package(str(requirement.requirement))
self.install_pip_package(requirement.full_specifier)
except subprocess.CalledProcessError:
print(
f"Could not install {requirement.requirement.name}, so "
f"Could not install {requirement.package_name}, so "
f"{requirement.repercussion}. Continuing."
)
@@ -584,12 +603,6 @@ if __name__ == "__main__":
populate = False
opts = parser.parse_args(sys.argv[1:])
# We want to be able to import the "mach.requirements" module.
sys.path.append(os.path.join(opts.topsrcdir, "python", "mach"))
# Virtualenv logic needs access to the vendored "packaging" library.
sys.path.append(os.path.join(opts.topsrcdir, "third_party", "python", "pyparsing"))
sys.path.append(os.path.join(opts.topsrcdir, "third_party", "python", "packaging"))
manager = VirtualenvManager(
opts.topsrcdir,
opts.virtualenvs_dir,


@@ -1,101 +0,0 @@
Metadata-Version: 1.2
Name: MarkupSafe
Version: 1.1.1
Summary: Safely add untrusted strings to HTML/XML markup.
Home-page: https://palletsprojects.com/p/markupsafe/
Author: Armin Ronacher
Author-email: armin.ronacher@active-4.com
Maintainer: The Pallets Team
Maintainer-email: contact@palletsprojects.com
License: BSD-3-Clause
Project-URL: Documentation, https://markupsafe.palletsprojects.com/
Project-URL: Code, https://github.com/pallets/markupsafe
Project-URL: Issue tracker, https://github.com/pallets/markupsafe/issues
Description: MarkupSafe
==========
MarkupSafe implements a text object that escapes characters so it is
safe to use in HTML and XML. Characters that have special meanings are
replaced so that they display as the actual characters. This mitigates
injection attacks, meaning untrusted user input can safely be displayed
on a page.
Installing
----------
Install and update using `pip`_:
.. code-block:: text
pip install -U MarkupSafe
.. _pip: https://pip.pypa.io/en/stable/quickstart/
Examples
--------
.. code-block:: pycon
>>> from markupsafe import Markup, escape
>>> # escape replaces special characters and wraps in Markup
>>> escape('<script>alert(document.cookie);</script>')
Markup(u'&lt;script&gt;alert(document.cookie);&lt;/script&gt;')
>>> # wrap in Markup to mark text "safe" and prevent escaping
>>> Markup('<strong>Hello</strong>')
Markup('<strong>hello</strong>')
>>> escape(Markup('<strong>Hello</strong>'))
Markup('<strong>hello</strong>')
>>> # Markup is a text subclass (str on Python 3, unicode on Python 2)
>>> # methods and operators escape their arguments
>>> template = Markup("Hello <em>%s</em>")
>>> template % '"World"'
Markup('Hello <em>&#34;World&#34;</em>')
Donate
------
The Pallets organization develops and supports MarkupSafe and other
libraries that use it. In order to grow the community of contributors
and users, and allow the maintainers to devote more time to the
projects, `please donate today`_.
.. _please donate today: https://palletsprojects.com/donate
Links
-----
* Website: https://palletsprojects.com/p/markupsafe/
* Documentation: https://markupsafe.palletsprojects.com/
* License: `BSD-3-Clause <https://github.com/pallets/markupsafe/blob/master/LICENSE.rst>`_
* Releases: https://pypi.org/project/MarkupSafe/
* Code: https://github.com/pallets/markupsafe
* Issue tracker: https://github.com/pallets/markupsafe/issues
* Test status:
* Linux, Mac: https://travis-ci.org/pallets/markupsafe
* Windows: https://ci.appveyor.com/project/pallets/markupsafe
* Test coverage: https://codecov.io/gh/pallets/markupsafe
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing :: Markup :: HTML
Requires-Python: >=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*


@@ -1,31 +0,0 @@
CHANGES.rst
LICENSE.rst
MANIFEST.in
README.rst
setup.cfg
setup.py
tox.ini
docs/Makefile
docs/changes.rst
docs/conf.py
docs/escaping.rst
docs/formatting.rst
docs/html.rst
docs/index.rst
docs/license.rst
docs/make.bat
docs/requirements.txt
src/MarkupSafe.egg-info/PKG-INFO
src/MarkupSafe.egg-info/SOURCES.txt
src/MarkupSafe.egg-info/dependency_links.txt
src/MarkupSafe.egg-info/top_level.txt
src/markupsafe/__init__.py
src/markupsafe/_compat.py
src/markupsafe/_constants.py
src/markupsafe/_native.py
src/markupsafe/_speedups.c
tests/conftest.py
tests/test_escape.py
tests/test_exception_custom_html.py
tests/test_leak.py
tests/test_markupsafe.py


@@ -1 +0,0 @@
markupsafe


@@ -1,44 +0,0 @@
Metadata-Version: 1.2
Name: PyYAML
Version: 5.4.1
Summary: YAML parser and emitter for Python
Home-page: https://pyyaml.org/
Author: Kirill Simonov
Author-email: xi@resolvent.net
License: MIT
Download-URL: https://pypi.org/project/PyYAML/
Project-URL: Bug Tracker, https://github.com/yaml/pyyaml/issues
Project-URL: CI, https://github.com/yaml/pyyaml/actions
Project-URL: Documentation, https://pyyaml.org/wiki/PyYAMLDocumentation
Project-URL: Mailing lists, http://lists.sourceforge.net/lists/listinfo/yaml-core
Project-URL: Source Code, https://github.com/yaml/pyyaml
Description: YAML is a data serialization format designed for human readability
and interaction with scripting languages. PyYAML is a YAML parser
and emitter for Python.
PyYAML features a complete YAML 1.1 parser, Unicode support, pickle
support, capable extension API, and sensible error messages. PyYAML
supports standard YAML tags and provides Python-specific tags that
allow to represent an arbitrary Python object.
PyYAML is applicable for a broad range of tasks from complex
configuration files to object serialization and persistence.
Platform: Any
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Cython
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing :: Markup
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*


@@ -1,670 +0,0 @@
CHANGES
LICENSE
MANIFEST.in
Makefile
README
pyproject.toml
setup.cfg
setup.py
examples/pygments-lexer/example.yaml
examples/pygments-lexer/yaml.py
examples/yaml-highlight/yaml_hl.cfg
examples/yaml-highlight/yaml_hl.py
lib/_yaml/__init__.py
lib/yaml/__init__.py
lib/yaml/composer.py
lib/yaml/constructor.py
lib/yaml/cyaml.py
lib/yaml/dumper.py
lib/yaml/emitter.py
lib/yaml/error.py
lib/yaml/events.py
lib/yaml/loader.py
lib/yaml/nodes.py
lib/yaml/parser.py
lib/yaml/reader.py
lib/yaml/representer.py
lib/yaml/resolver.py
lib/yaml/scanner.py
lib/yaml/serializer.py
lib/yaml/tokens.py
lib3/PyYAML.egg-info/PKG-INFO
lib3/PyYAML.egg-info/SOURCES.txt
lib3/PyYAML.egg-info/dependency_links.txt
lib3/PyYAML.egg-info/top_level.txt
lib3/_yaml/__init__.py
lib3/yaml/__init__.py
lib3/yaml/composer.py
lib3/yaml/constructor.py
lib3/yaml/cyaml.py
lib3/yaml/dumper.py
lib3/yaml/emitter.py
lib3/yaml/error.py
lib3/yaml/events.py
lib3/yaml/loader.py
lib3/yaml/nodes.py
lib3/yaml/parser.py
lib3/yaml/reader.py
lib3/yaml/representer.py
lib3/yaml/resolver.py
lib3/yaml/scanner.py
lib3/yaml/serializer.py
lib3/yaml/tokens.py
tests/data/a-nasty-libyaml-bug.loader-error
tests/data/aliases-cdumper-bug.code
tests/data/aliases.events
tests/data/bool.data
tests/data/bool.detect
tests/data/construct-binary-py2.code
tests/data/construct-binary-py2.data
tests/data/construct-binary-py3.code
tests/data/construct-binary-py3.data
tests/data/construct-bool.code
tests/data/construct-bool.data
tests/data/construct-custom.code
tests/data/construct-custom.data
tests/data/construct-float.code
tests/data/construct-float.data
tests/data/construct-int.code
tests/data/construct-int.data
tests/data/construct-map.code
tests/data/construct-map.data
tests/data/construct-merge.code
tests/data/construct-merge.data
tests/data/construct-null.code
tests/data/construct-null.data
tests/data/construct-omap.code
tests/data/construct-omap.data
tests/data/construct-pairs.code
tests/data/construct-pairs.data
tests/data/construct-python-bool.code
tests/data/construct-python-bool.data
tests/data/construct-python-bytes-py3.code
tests/data/construct-python-bytes-py3.data
tests/data/construct-python-complex.code
tests/data/construct-python-complex.data
tests/data/construct-python-float.code
tests/data/construct-python-float.data
tests/data/construct-python-int.code
tests/data/construct-python-int.data
tests/data/construct-python-long-short-py2.code
tests/data/construct-python-long-short-py2.data
tests/data/construct-python-long-short-py3.code
tests/data/construct-python-long-short-py3.data
tests/data/construct-python-name-module.code
tests/data/construct-python-name-module.data
tests/data/construct-python-none.code
tests/data/construct-python-none.data
tests/data/construct-python-object.code
tests/data/construct-python-object.data
tests/data/construct-python-str-ascii.code
tests/data/construct-python-str-ascii.data
tests/data/construct-python-str-utf8-py2.code
tests/data/construct-python-str-utf8-py2.data
tests/data/construct-python-str-utf8-py3.code
tests/data/construct-python-str-utf8-py3.data
tests/data/construct-python-tuple-list-dict.code
tests/data/construct-python-tuple-list-dict.data
tests/data/construct-python-unicode-ascii-py2.code
tests/data/construct-python-unicode-ascii-py2.data
tests/data/construct-python-unicode-ascii-py3.code
tests/data/construct-python-unicode-ascii-py3.data
tests/data/construct-python-unicode-utf8-py2.code
tests/data/construct-python-unicode-utf8-py2.data
tests/data/construct-python-unicode-utf8-py3.code
tests/data/construct-python-unicode-utf8-py3.data
tests/data/construct-seq.code
tests/data/construct-seq.data
tests/data/construct-set.code
tests/data/construct-set.data
tests/data/construct-str-ascii.code
tests/data/construct-str-ascii.data
tests/data/construct-str-utf8-py2.code
tests/data/construct-str-utf8-py2.data
tests/data/construct-str-utf8-py3.code
tests/data/construct-str-utf8-py3.data
tests/data/construct-str.code
tests/data/construct-str.data
tests/data/construct-timestamp.code
tests/data/construct-timestamp.data
tests/data/construct-value.code
tests/data/construct-value.data
tests/data/document-separator-in-quoted-scalar.loader-error
tests/data/documents.events
tests/data/duplicate-anchor-1.loader-error
tests/data/duplicate-anchor-2.loader-error
tests/data/duplicate-key.former-loader-error.code
tests/data/duplicate-key.former-loader-error.data
tests/data/duplicate-mapping-key.former-loader-error.code
tests/data/duplicate-mapping-key.former-loader-error.data
tests/data/duplicate-merge-key.former-loader-error.code
tests/data/duplicate-merge-key.former-loader-error.data
tests/data/duplicate-tag-directive.loader-error
tests/data/duplicate-value-key.former-loader-error.code
tests/data/duplicate-value-key.former-loader-error.data
tests/data/duplicate-yaml-directive.loader-error
tests/data/emit-block-scalar-in-simple-key-context-bug.canonical
tests/data/emit-block-scalar-in-simple-key-context-bug.data
tests/data/emitting-unacceptable-unicode-character-bug-py3.code
tests/data/emitting-unacceptable-unicode-character-bug-py3.data
tests/data/emitting-unacceptable-unicode-character-bug-py3.skip-ext
tests/data/emitting-unacceptable-unicode-character-bug.code
tests/data/emitting-unacceptable-unicode-character-bug.data
tests/data/emitting-unacceptable-unicode-character-bug.skip-ext
tests/data/emoticons.unicode
tests/data/emoticons2.unicode
tests/data/empty-anchor.emitter-error
tests/data/empty-document-bug.canonical
tests/data/empty-document-bug.data
tests/data/empty-document-bug.empty
tests/data/empty-documents.single-loader-error
tests/data/empty-python-module.loader-error
tests/data/empty-python-name.loader-error
tests/data/empty-tag-handle.emitter-error
tests/data/empty-tag-prefix.emitter-error
tests/data/empty-tag.emitter-error
tests/data/expected-document-end.emitter-error
tests/data/expected-document-start.emitter-error
tests/data/expected-mapping.loader-error
tests/data/expected-node-1.emitter-error
tests/data/expected-node-2.emitter-error
tests/data/expected-nothing.emitter-error
tests/data/expected-scalar.loader-error
tests/data/expected-sequence.loader-error
tests/data/expected-stream-start.emitter-error
tests/data/explicit-document.single-loader-error
tests/data/fetch-complex-value-bug.loader-error
tests/data/float-representer-2.3-bug.code
tests/data/float-representer-2.3-bug.data
tests/data/float.data
tests/data/float.detect
tests/data/forbidden-entry.loader-error
tests/data/forbidden-key.loader-error
tests/data/forbidden-value.loader-error
tests/data/implicit-document.single-loader-error
tests/data/int.data
tests/data/int.detect
tests/data/invalid-anchor-1.loader-error
tests/data/invalid-anchor-2.loader-error
tests/data/invalid-anchor.emitter-error
tests/data/invalid-base64-data-2.loader-error
tests/data/invalid-base64-data.loader-error
tests/data/invalid-block-scalar-indicator.loader-error
tests/data/invalid-character.loader-error
tests/data/invalid-character.stream-error
tests/data/invalid-directive-line.loader-error
tests/data/invalid-directive-name-1.loader-error
tests/data/invalid-directive-name-2.loader-error
tests/data/invalid-escape-character.loader-error
tests/data/invalid-escape-numbers.loader-error
tests/data/invalid-indentation-indicator-1.loader-error
tests/data/invalid-indentation-indicator-2.loader-error
tests/data/invalid-item-without-trailing-break.loader-error
tests/data/invalid-merge-1.loader-error
tests/data/invalid-merge-2.loader-error
tests/data/invalid-omap-1.loader-error
tests/data/invalid-omap-2.loader-error
tests/data/invalid-omap-3.loader-error
tests/data/invalid-pairs-1.loader-error
tests/data/invalid-pairs-2.loader-error
tests/data/invalid-pairs-3.loader-error
tests/data/invalid-python-bytes-2-py3.loader-error
tests/data/invalid-python-bytes-py3.loader-error
tests/data/invalid-python-module-kind.loader-error
tests/data/invalid-python-module-value.loader-error
tests/data/invalid-python-module.loader-error
tests/data/invalid-python-name-kind.loader-error
tests/data/invalid-python-name-module.loader-error
tests/data/invalid-python-name-object.loader-error
tests/data/invalid-python-name-value.loader-error
tests/data/invalid-simple-key.loader-error
tests/data/invalid-single-quote-bug.code
tests/data/invalid-single-quote-bug.data
tests/data/invalid-starting-character.loader-error
tests/data/invalid-tag-1.loader-error
tests/data/invalid-tag-2.loader-error
tests/data/invalid-tag-directive-handle.loader-error
tests/data/invalid-tag-directive-prefix.loader-error
tests/data/invalid-tag-handle-1.emitter-error
tests/data/invalid-tag-handle-1.loader-error
tests/data/invalid-tag-handle-2.emitter-error
tests/data/invalid-tag-handle-2.loader-error
tests/data/invalid-uri-escapes-1.loader-error
tests/data/invalid-uri-escapes-2.loader-error
tests/data/invalid-uri-escapes-3.loader-error
tests/data/invalid-uri.loader-error
tests/data/invalid-utf8-byte.loader-error
tests/data/invalid-utf8-byte.stream-error
tests/data/invalid-yaml-directive-version-1.loader-error
tests/data/invalid-yaml-directive-version-2.loader-error
tests/data/invalid-yaml-directive-version-3.loader-error
tests/data/invalid-yaml-directive-version-4.loader-error
tests/data/invalid-yaml-directive-version-5.loader-error
tests/data/invalid-yaml-directive-version-6.loader-error
tests/data/invalid-yaml-version.loader-error
tests/data/latin.unicode
tests/data/mapping.sort
tests/data/mapping.sorted
tests/data/mappings.events
tests/data/merge.data
tests/data/merge.detect
tests/data/more-floats.code
tests/data/more-floats.data
tests/data/multi-constructor.code
tests/data/multi-constructor.multi
tests/data/myfullloader.subclass_blacklist
tests/data/negative-float-bug.code
tests/data/negative-float-bug.data
tests/data/no-alias-anchor.emitter-error
tests/data/no-alias-anchor.skip-ext
tests/data/no-block-collection-end.loader-error
tests/data/no-block-mapping-end-2.loader-error
tests/data/no-block-mapping-end.loader-error
tests/data/no-document-start.loader-error
tests/data/no-flow-mapping-end.loader-error
tests/data/no-flow-sequence-end.loader-error
tests/data/no-node-1.loader-error
tests/data/no-node-2.loader-error
tests/data/no-tag.emitter-error
tests/data/null.data
tests/data/null.detect
tests/data/odd-utf16.stream-error
tests/data/overwrite-state-new-constructor.loader-error
tests/data/recursive-anchor.former-loader-error
tests/data/recursive-dict.recursive
tests/data/recursive-list.recursive
tests/data/recursive-set.recursive
tests/data/recursive-state.recursive
tests/data/recursive-tuple.recursive
tests/data/recursive.former-dumper-error
tests/data/remove-possible-simple-key-bug.loader-error
tests/data/resolver.data
tests/data/resolver.path
tests/data/run-parser-crash-bug.data
tests/data/scalars.events
tests/data/scan-document-end-bug.canonical
tests/data/scan-document-end-bug.data
tests/data/scan-line-break-bug.canonical
tests/data/scan-line-break-bug.data
tests/data/sequences.events
tests/data/serializer-is-already-opened.dumper-error
tests/data/serializer-is-closed-1.dumper-error
tests/data/serializer-is-closed-2.dumper-error
tests/data/serializer-is-not-opened-1.dumper-error
tests/data/serializer-is-not-opened-2.dumper-error
tests/data/single-dot-is-not-float-bug.code
tests/data/single-dot-is-not-float-bug.data
tests/data/sloppy-indentation.canonical
tests/data/sloppy-indentation.data
tests/data/spec-02-01.data
tests/data/spec-02-01.structure
tests/data/spec-02-01.tokens
tests/data/spec-02-02.data
tests/data/spec-02-02.structure
tests/data/spec-02-02.tokens
tests/data/spec-02-03.data
tests/data/spec-02-03.structure
tests/data/spec-02-03.tokens
tests/data/spec-02-04.data
tests/data/spec-02-04.structure
tests/data/spec-02-04.tokens
tests/data/spec-02-05.data
tests/data/spec-02-05.structure
tests/data/spec-02-05.tokens
tests/data/spec-02-06.data
tests/data/spec-02-06.structure
tests/data/spec-02-06.tokens
tests/data/spec-02-07.data
tests/data/spec-02-07.structure
tests/data/spec-02-07.tokens
tests/data/spec-02-08.data
tests/data/spec-02-08.structure
tests/data/spec-02-08.tokens
tests/data/spec-02-09.data
tests/data/spec-02-09.structure
tests/data/spec-02-09.tokens
tests/data/spec-02-10.data
tests/data/spec-02-10.structure
tests/data/spec-02-10.tokens
tests/data/spec-02-11.data
tests/data/spec-02-11.structure
tests/data/spec-02-11.tokens
tests/data/spec-02-12.data
tests/data/spec-02-12.structure
tests/data/spec-02-12.tokens
tests/data/spec-02-13.data
tests/data/spec-02-13.structure
tests/data/spec-02-13.tokens
tests/data/spec-02-14.data
tests/data/spec-02-14.structure
tests/data/spec-02-14.tokens
tests/data/spec-02-15.data
tests/data/spec-02-15.structure
tests/data/spec-02-15.tokens
tests/data/spec-02-16.data
tests/data/spec-02-16.structure
tests/data/spec-02-16.tokens
tests/data/spec-02-17.data
tests/data/spec-02-17.structure
tests/data/spec-02-17.tokens
tests/data/spec-02-18.data
tests/data/spec-02-18.structure
tests/data/spec-02-18.tokens
tests/data/spec-02-19.data
tests/data/spec-02-19.structure
tests/data/spec-02-19.tokens
tests/data/spec-02-20.data
tests/data/spec-02-20.structure
tests/data/spec-02-20.tokens
tests/data/spec-02-21.data
tests/data/spec-02-21.structure
tests/data/spec-02-21.tokens
tests/data/spec-02-22.data
tests/data/spec-02-22.structure
tests/data/spec-02-22.tokens
tests/data/spec-02-23.data
tests/data/spec-02-23.structure
tests/data/spec-02-23.tokens
tests/data/spec-02-24.data
tests/data/spec-02-24.structure
tests/data/spec-02-24.tokens
tests/data/spec-02-25.data
tests/data/spec-02-25.structure
tests/data/spec-02-25.tokens
tests/data/spec-02-26.data
tests/data/spec-02-26.structure
tests/data/spec-02-26.tokens
tests/data/spec-02-27.data
tests/data/spec-02-27.structure
tests/data/spec-02-27.tokens
tests/data/spec-02-28.data
tests/data/spec-02-28.structure
tests/data/spec-02-28.tokens
tests/data/spec-05-01-utf16be.data
tests/data/spec-05-01-utf16be.empty
tests/data/spec-05-01-utf16le.data
tests/data/spec-05-01-utf16le.empty
tests/data/spec-05-01-utf8.data
tests/data/spec-05-01-utf8.empty
tests/data/spec-05-02-utf16be.data
tests/data/spec-05-02-utf16be.error
tests/data/spec-05-02-utf16le.data
tests/data/spec-05-02-utf16le.error
tests/data/spec-05-02-utf8.data
tests/data/spec-05-02-utf8.error
tests/data/spec-05-03.canonical
tests/data/spec-05-03.data
tests/data/spec-05-04.canonical
tests/data/spec-05-04.data
tests/data/spec-05-05.data
tests/data/spec-05-05.empty
tests/data/spec-05-06.canonical
tests/data/spec-05-06.data
tests/data/spec-05-07.canonical
tests/data/spec-05-07.data
tests/data/spec-05-08.canonical
tests/data/spec-05-08.data
tests/data/spec-05-09.canonical
tests/data/spec-05-09.data
tests/data/spec-05-10.data
tests/data/spec-05-10.error
tests/data/spec-05-11.canonical
tests/data/spec-05-11.data
tests/data/spec-05-12.data
tests/data/spec-05-12.error
tests/data/spec-05-13.canonical
tests/data/spec-05-13.data
tests/data/spec-05-14.canonical
tests/data/spec-05-14.data
tests/data/spec-05-15.data
tests/data/spec-05-15.error
tests/data/spec-06-01.canonical
tests/data/spec-06-01.data
tests/data/spec-06-02.data
tests/data/spec-06-02.empty
tests/data/spec-06-03.canonical
tests/data/spec-06-03.data
tests/data/spec-06-04.canonical
tests/data/spec-06-04.data
tests/data/spec-06-05.canonical
tests/data/spec-06-05.data
tests/data/spec-06-06.canonical
tests/data/spec-06-06.data
tests/data/spec-06-07.canonical
tests/data/spec-06-07.data
tests/data/spec-06-08.canonical
tests/data/spec-06-08.data
tests/data/spec-07-01.canonical
tests/data/spec-07-01.data
tests/data/spec-07-01.skip-ext
tests/data/spec-07-02.canonical
tests/data/spec-07-02.data
tests/data/spec-07-02.skip-ext
tests/data/spec-07-03.data
tests/data/spec-07-03.error
tests/data/spec-07-04.canonical
tests/data/spec-07-04.data
tests/data/spec-07-05.data
tests/data/spec-07-05.error
tests/data/spec-07-06.canonical
tests/data/spec-07-06.data
tests/data/spec-07-07a.canonical
tests/data/spec-07-07a.data
tests/data/spec-07-07b.canonical
tests/data/spec-07-07b.data
tests/data/spec-07-08.canonical
tests/data/spec-07-08.data
tests/data/spec-07-09.canonical
tests/data/spec-07-09.data
tests/data/spec-07-10.canonical
tests/data/spec-07-10.data
tests/data/spec-07-11.data
tests/data/spec-07-11.empty
tests/data/spec-07-12a.canonical
tests/data/spec-07-12a.data
tests/data/spec-07-12b.canonical
tests/data/spec-07-12b.data
tests/data/spec-07-13.canonical
tests/data/spec-07-13.data
tests/data/spec-08-01.canonical
tests/data/spec-08-01.data
tests/data/spec-08-02.canonical
tests/data/spec-08-02.data
tests/data/spec-08-03.canonical
tests/data/spec-08-03.data
tests/data/spec-08-04.data
tests/data/spec-08-04.error
tests/data/spec-08-05.canonical
tests/data/spec-08-05.data
tests/data/spec-08-06.data
tests/data/spec-08-06.error
tests/data/spec-08-07.canonical
tests/data/spec-08-07.data
tests/data/spec-08-08.canonical
tests/data/spec-08-08.data
tests/data/spec-08-09.canonical
tests/data/spec-08-09.data
tests/data/spec-08-10.canonical
tests/data/spec-08-10.data
tests/data/spec-08-11.canonical
tests/data/spec-08-11.data
tests/data/spec-08-12.canonical
tests/data/spec-08-12.data
tests/data/spec-08-13.canonical
tests/data/spec-08-13.data
tests/data/spec-08-13.skip-ext
tests/data/spec-08-14.canonical
tests/data/spec-08-14.data
tests/data/spec-08-15.canonical
tests/data/spec-08-15.data
tests/data/spec-09-01.canonical
tests/data/spec-09-01.data
tests/data/spec-09-02.canonical
tests/data/spec-09-02.data
tests/data/spec-09-03.canonical
tests/data/spec-09-03.data
tests/data/spec-09-04.canonical
tests/data/spec-09-04.data
tests/data/spec-09-05.canonical
tests/data/spec-09-05.data
tests/data/spec-09-06.canonical
tests/data/spec-09-06.data
tests/data/spec-09-07.canonical
tests/data/spec-09-07.data
tests/data/spec-09-08.canonical
tests/data/spec-09-08.data
tests/data/spec-09-09.canonical
tests/data/spec-09-09.data
tests/data/spec-09-10.canonical
tests/data/spec-09-10.data
tests/data/spec-09-11.canonical
tests/data/spec-09-11.data
tests/data/spec-09-12.canonical
tests/data/spec-09-12.data
tests/data/spec-09-13.canonical
tests/data/spec-09-13.data
tests/data/spec-09-14.data
tests/data/spec-09-14.error
tests/data/spec-09-15.canonical
tests/data/spec-09-15.data
tests/data/spec-09-16.canonical
tests/data/spec-09-16.data
tests/data/spec-09-17.canonical
tests/data/spec-09-17.data
tests/data/spec-09-18.canonical
tests/data/spec-09-18.data
tests/data/spec-09-19.canonical
tests/data/spec-09-19.data
tests/data/spec-09-20.canonical
tests/data/spec-09-20.data
tests/data/spec-09-20.skip-ext
tests/data/spec-09-21.data
tests/data/spec-09-21.error
tests/data/spec-09-22.canonical
tests/data/spec-09-22.data
tests/data/spec-09-23.canonical
tests/data/spec-09-23.data
tests/data/spec-09-24.canonical
tests/data/spec-09-24.data
tests/data/spec-09-25.canonical
tests/data/spec-09-25.data
tests/data/spec-09-26.canonical
tests/data/spec-09-26.data
tests/data/spec-09-27.canonical
tests/data/spec-09-27.data
tests/data/spec-09-28.canonical
tests/data/spec-09-28.data
tests/data/spec-09-29.canonical
tests/data/spec-09-29.data
tests/data/spec-09-30.canonical
tests/data/spec-09-30.data
tests/data/spec-09-31.canonical
tests/data/spec-09-31.data
tests/data/spec-09-32.canonical
tests/data/spec-09-32.data
tests/data/spec-09-33.canonical
tests/data/spec-09-33.data
tests/data/spec-10-01.canonical
tests/data/spec-10-01.data
tests/data/spec-10-02.canonical
tests/data/spec-10-02.data
tests/data/spec-10-03.canonical
tests/data/spec-10-03.data
tests/data/spec-10-04.canonical
tests/data/spec-10-04.data
tests/data/spec-10-05.canonical
tests/data/spec-10-05.data
tests/data/spec-10-06.canonical
tests/data/spec-10-06.data
tests/data/spec-10-07.canonical
tests/data/spec-10-07.data
tests/data/spec-10-08.data
tests/data/spec-10-08.error
tests/data/spec-10-09.canonical
tests/data/spec-10-09.data
tests/data/spec-10-10.canonical
tests/data/spec-10-10.data
tests/data/spec-10-11.canonical
tests/data/spec-10-11.data
tests/data/spec-10-12.canonical
tests/data/spec-10-12.data
tests/data/spec-10-13.canonical
tests/data/spec-10-13.data
tests/data/spec-10-14.canonical
tests/data/spec-10-14.data
tests/data/spec-10-15.canonical
tests/data/spec-10-15.data
tests/data/str.data
tests/data/str.detect
tests/data/tags.events
tests/data/test_mark.marks
tests/data/timestamp-bugs.code
tests/data/timestamp-bugs.data
tests/data/timestamp.data
tests/data/timestamp.detect
tests/data/unacceptable-key.loader-error
tests/data/unclosed-bracket.loader-error
tests/data/unclosed-quoted-scalar.loader-error
tests/data/undefined-anchor.loader-error
tests/data/undefined-constructor.loader-error
tests/data/undefined-tag-handle.loader-error
tests/data/unknown.dumper-error
tests/data/unsupported-version.emitter-error
tests/data/utf16be.code
tests/data/utf16be.data
tests/data/utf16le.code
tests/data/utf16le.data
tests/data/utf8-implicit.code
tests/data/utf8-implicit.data
tests/data/utf8.code
tests/data/utf8.data
tests/data/value.data
tests/data/value.detect
tests/data/yaml.data
tests/data/yaml.detect
tests/lib/canonical.py
tests/lib/test_all.py
tests/lib/test_appliance.py
tests/lib/test_build.py
tests/lib/test_build_ext.py
tests/lib/test_canonical.py
tests/lib/test_constructor.py
tests/lib/test_emitter.py
tests/lib/test_errors.py
tests/lib/test_input_output.py
tests/lib/test_mark.py
tests/lib/test_multi_constructor.py
tests/lib/test_reader.py
tests/lib/test_recursive.py
tests/lib/test_representer.py
tests/lib/test_resolver.py
tests/lib/test_sort_keys.py
tests/lib/test_structure.py
tests/lib/test_tokens.py
tests/lib/test_yaml.py
tests/lib/test_yaml_ext.py
tests/lib3/canonical.py
tests/lib3/test_all.py
tests/lib3/test_appliance.py
tests/lib3/test_build.py
tests/lib3/test_build_ext.py
tests/lib3/test_canonical.py
tests/lib3/test_constructor.py
tests/lib3/test_emitter.py
tests/lib3/test_errors.py
tests/lib3/test_input_output.py
tests/lib3/test_mark.py
tests/lib3/test_multi_constructor.py
tests/lib3/test_reader.py
tests/lib3/test_recursive.py
tests/lib3/test_representer.py
tests/lib3/test_resolver.py
tests/lib3/test_sort_keys.py
tests/lib3/test_structure.py
tests/lib3/test_tokens.py
tests/lib3/test_yaml.py
tests/lib3/test_yaml_ext.py
yaml/__init__.pxd
yaml/_yaml.h
yaml/_yaml.pxd
yaml/_yaml.pyx

View file

@ -1,2 +0,0 @@
_yaml
yaml

View file

@ -1,966 +0,0 @@
Metadata-Version: 2.1
Name: aiohttp
Version: 3.7.4.post0
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author: Nikolay Kim
Author-email: fafhrd91@gmail.com
Maintainer: Nikolay Kim <fafhrd91@gmail.com>, Andrew Svetlov <andrew.svetlov@gmail.com>
Maintainer-email: aio-libs@googlegroups.com
License: Apache 2
Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby
Project-URL: CI: Azure Pipelines, https://dev.azure.com/aio-libs/aiohttp/_build
Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/aiohttp
Project-URL: Docs: RTD, https://docs.aiohttp.org
Project-URL: GitHub: issues, https://github.com/aio-libs/aiohttp/issues
Project-URL: GitHub: repo, https://github.com/aio-libs/aiohttp
Description: ==================================
Async http client/server framework
==================================
.. image:: https://raw.githubusercontent.com/aio-libs/aiohttp/master/docs/_static/aiohttp-icon-128x128.png
:height: 64px
:width: 64px
:alt: aiohttp logo
|
.. image:: https://github.com/aio-libs/aiohttp/workflows/CI/badge.svg
:target: https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI
:alt: GitHub Actions status for master branch
.. image:: https://codecov.io/gh/aio-libs/aiohttp/branch/master/graph/badge.svg
:target: https://codecov.io/gh/aio-libs/aiohttp
:alt: codecov.io status for master branch
.. image:: https://badge.fury.io/py/aiohttp.svg
:target: https://pypi.org/project/aiohttp
:alt: Latest PyPI package version
.. image:: https://readthedocs.org/projects/aiohttp/badge/?version=latest
:target: https://docs.aiohttp.org/
:alt: Latest Read The Docs
.. image:: https://img.shields.io/discourse/status?server=https%3A%2F%2Faio-libs.discourse.group
:target: https://aio-libs.discourse.group
:alt: Discourse status
.. image:: https://badges.gitter.im/Join%20Chat.svg
:target: https://gitter.im/aio-libs/Lobby
:alt: Chat on Gitter
Key Features
============
- Supports both client and server side of HTTP protocol.
- Supports both client and server Web-Sockets out-of-the-box and avoids
Callback Hell.
- Provides Web-server with middlewares and pluggable routing (a minimal sketch follows this list).
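As a rough illustration of the middleware and routing hooks listed above, here is a minimal sketch; the handler, header name, and route are invented for this example and are not part of the aiohttp distribution:
.. code-block:: python

    from aiohttp import web

    async def hello(request):
        # Plain handler used only to give the middleware something to wrap.
        return web.Response(text="Hello, world")

    @web.middleware
    async def add_server_header(request, handler):
        # Run the downstream handler, then tag the response with a custom header.
        response = await handler(request)
        response.headers["X-Served-By"] = "aiohttp"
        return response

    app = web.Application(middlewares=[add_server_header])
    app.add_routes([web.get("/", hello)])

    if __name__ == "__main__":
        web.run_app(app)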
Getting started
===============
Client
------
To get something from the web:
.. code-block:: python

    import aiohttp
    import asyncio

    async def main():
        async with aiohttp.ClientSession() as session:
            async with session.get('http://python.org') as response:
                print("Status:", response.status)
                print("Content-type:", response.headers['content-type'])
                html = await response.text()
                print("Body:", html[:15], "...")

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())

This prints:

.. code-block::

    Status: 200
    Content-type: text/html; charset=utf-8
    Body: <!doctype html> ...
Coming from `requests <https://requests.readthedocs.io/>`_ ? Read `why we need so many lines <https://aiohttp.readthedocs.io/en/latest/http_request_lifecycle.html>`_.
Server
------
An example using a simple server:
.. code-block:: python

    # examples/server_simple.py
    from aiohttp import web

    async def handle(request):
        name = request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        return web.Response(text=text)

    async def wshandle(request):
        ws = web.WebSocketResponse()
        await ws.prepare(request)

        async for msg in ws:
            if msg.type == web.WSMsgType.text:
                await ws.send_str("Hello, {}".format(msg.data))
            elif msg.type == web.WSMsgType.binary:
                await ws.send_bytes(msg.data)
            elif msg.type == web.WSMsgType.close:
                break

        return ws

    app = web.Application()
    app.add_routes([web.get('/', handle),
                    web.get('/echo', wshandle),
                    web.get('/{name}', handle)])

    if __name__ == '__main__':
        web.run_app(app)
Documentation
=============
https://aiohttp.readthedocs.io/
Demos
=====
https://github.com/aio-libs/aiohttp-demos
External links
==============
* `Third party libraries
<http://aiohttp.readthedocs.io/en/latest/third_party.html>`_
* `Built with aiohttp
<http://aiohttp.readthedocs.io/en/latest/built_with.html>`_
* `Powered by aiohttp
<http://aiohttp.readthedocs.io/en/latest/powered_by.html>`_
Feel free to make a Pull Request for adding your link to these pages!
Communication channels
======================
*aio-libs discourse group*: https://aio-libs.discourse.group
*gitter chat* https://gitter.im/aio-libs/Lobby
We support `Stack Overflow
<https://stackoverflow.com/questions/tagged/aiohttp>`_.
Please add *aiohttp* tag to your question there.
Requirements
============
- Python >= 3.6
- async-timeout_
- attrs_
- chardet_
- multidict_
- yarl_
Optionally you may install the cChardet_ and aiodns_ libraries (highly
recommended for sake of speed).
.. _chardet: https://pypi.python.org/pypi/chardet
.. _aiodns: https://pypi.python.org/pypi/aiodns
.. _attrs: https://github.com/python-attrs/attrs
.. _multidict: https://pypi.python.org/pypi/multidict
.. _yarl: https://pypi.python.org/pypi/yarl
.. _async-timeout: https://pypi.python.org/pypi/async_timeout
.. _cChardet: https://pypi.python.org/pypi/cchardet
License
=======
``aiohttp`` is offered under the Apache 2 license.
Keepsafe
========
The aiohttp community would like to thank Keepsafe
(https://www.getkeepsafe.com) for its support in the early days of
the project.
Source code
===========
The latest developer version is available in a GitHub repository:
https://github.com/aio-libs/aiohttp
Benchmarks
==========
If you are interested in efficiency, the AsyncIO community maintains a
list of benchmarks on the official wiki:
https://github.com/python/asyncio/wiki/Benchmarks
=========
Changelog
=========
..
You should *NOT* be adding new change log entries to this file, this
file is managed by towncrier. You *may* edit previous change logs to
fix problems like typo corrections or such.
To add a new change log entry, please see
https://pip.pypa.io/en/latest/development/#adding-a-news-entry
we named the news folder "changes".
WARNING: Don't drop the next directive!
.. towncrier release notes start
3.7.4.post0 (2021-03-06)
========================
Misc
----
- Bumped upper bound of the ``chardet`` runtime dependency
to allow their v4.0 version stream.
`#5366 <https://github.com/aio-libs/aiohttp/issues/5366>`_
----
3.7.4 (2021-02-25)
==================
Bugfixes
--------
- **(SECURITY BUG)** Started preventing open redirects in the
``aiohttp.web.normalize_path_middleware`` middleware. For
more details, see
https://github.com/aio-libs/aiohttp/security/advisories/GHSA-v6wp-4m6f-gcjg.
Thanks to `Beast Glatisant <https://github.com/g147>`__ for
finding the first instance of this issue and `Jelmer Vernooij
<https://jelmer.uk/>`__ for reporting and tracking it down
in aiohttp.
`#5497 <https://github.com/aio-libs/aiohttp/issues/5497>`_
- Fixed a difference in how the pure-Python and the Cython-based
HTTP parsers construct a ``yarl.URL`` object for the HTTP request-target.
Before this fix, the Python parser would turn the URI's absolute-path
for ``//some-path`` into ``/`` while the Cython code preserved it as
``//some-path``. Now, both do the latter.
`#5498 <https://github.com/aio-libs/aiohttp/issues/5498>`_
----
3.7.3 (2020-11-18)
==================
Features
--------
- Use Brotli instead of brotlipy
`#3803 <https://github.com/aio-libs/aiohttp/issues/3803>`_
- Made exceptions pickleable. Also changed the repr of some exceptions.
`#4077 <https://github.com/aio-libs/aiohttp/issues/4077>`_
Bugfixes
--------
- Raise a ClientResponseError instead of an AssertionError for a blank
HTTP Reason Phrase.
`#3532 <https://github.com/aio-libs/aiohttp/issues/3532>`_
- Fix ``web_middlewares.normalize_path_middleware`` behavior for patch without slash.
`#3669 <https://github.com/aio-libs/aiohttp/issues/3669>`_
- Fix overshadowing of overlapped sub-applications prefixes.
`#3701 <https://github.com/aio-libs/aiohttp/issues/3701>`_
- Make `BaseConnector.close()` a coroutine and wait until the client closes all connections. Drop deprecated "with Connector():" syntax.
`#3736 <https://github.com/aio-libs/aiohttp/issues/3736>`_
- Reset the ``sock_read`` timeout each time data is received for a ``aiohttp.client`` response.
`#3808 <https://github.com/aio-libs/aiohttp/issues/3808>`_
- Fixed type annotation for add_view method of UrlDispatcher to accept any subclass of View
`#3880 <https://github.com/aio-libs/aiohttp/issues/3880>`_
- Fixed querying the address families from DNS that the current host supports.
`#5156 <https://github.com/aio-libs/aiohttp/issues/5156>`_
- Change return type of MultipartReader.__aiter__() and BodyPartReader.__aiter__() to AsyncIterator.
`#5163 <https://github.com/aio-libs/aiohttp/issues/5163>`_
- Provide x86 Windows wheels.
`#5230 <https://github.com/aio-libs/aiohttp/issues/5230>`_
Improved Documentation
----------------------
- Add documentation for ``aiohttp.web.FileResponse``.
`#3958 <https://github.com/aio-libs/aiohttp/issues/3958>`_
- Removed deprecation warning in tracing example docs
`#3964 <https://github.com/aio-libs/aiohttp/issues/3964>`_
- Fixed wrong "Usage" docstring of ``aiohttp.client.request``.
`#4603 <https://github.com/aio-libs/aiohttp/issues/4603>`_
- Add aiohttp-pydantic to third party libraries
`#5228 <https://github.com/aio-libs/aiohttp/issues/5228>`_
Misc
----
- `#4102 <https://github.com/aio-libs/aiohttp/issues/4102>`_
----
3.7.2 (2020-10-27)
==================
Bugfixes
--------
- Fixed static files handling for loops without ``.sendfile()`` support
`#5149 <https://github.com/aio-libs/aiohttp/issues/5149>`_
----
3.7.1 (2020-10-25)
==================
Bugfixes
--------
- Fixed a type error caused by the conditional import of `Protocol`.
`#5111 <https://github.com/aio-libs/aiohttp/issues/5111>`_
- Server doesn't send Content-Length for 1xx or 204
`#4901 <https://github.com/aio-libs/aiohttp/issues/4901>`_
- Fix run_app typing
`#4957 <https://github.com/aio-libs/aiohttp/issues/4957>`_
- Always require ``typing_extensions`` library.
`#5107 <https://github.com/aio-libs/aiohttp/issues/5107>`_
- Fix a variable-shadowing bug causing `ThreadedResolver.resolve` to
return the resolved IP as the ``hostname`` in each record, which prevented
validation of HTTPS connections.
`#5110 <https://github.com/aio-libs/aiohttp/issues/5110>`_
- Added annotations to all public attributes.
`#5115 <https://github.com/aio-libs/aiohttp/issues/5115>`_
- Fix flaky test_when_timeout_smaller_second
`#5116 <https://github.com/aio-libs/aiohttp/issues/5116>`_
- Ensure sending a zero byte file does not throw an exception
`#5124 <https://github.com/aio-libs/aiohttp/issues/5124>`_
- Fix a bug in ``web.run_app()`` about Python version checking on Windows
`#5127 <https://github.com/aio-libs/aiohttp/issues/5127>`_
----
3.7.0 (2020-10-24)
==================
Features
--------
- Response headers are now prepared prior to running ``on_response_prepare`` hooks, directly before headers are sent to the client.
`#1958 <https://github.com/aio-libs/aiohttp/issues/1958>`_
- Add a ``quote_cookie`` option to ``CookieJar``, a way to skip quotation wrapping of cookies containing special characters.
`#2571 <https://github.com/aio-libs/aiohttp/issues/2571>`_
- Call ``AccessLogger.log`` with the current exception available from ``sys.exc_info()``.
`#3557 <https://github.com/aio-libs/aiohttp/issues/3557>`_
- `web.UrlDispatcher.add_routes` and `web.Application.add_routes` return a list
of registered `AbstractRoute` instances. `AbstractRouteDef.register` (and all
subclasses) return a list of registered resources.
`#3866 <https://github.com/aio-libs/aiohttp/issues/3866>`_
- Added properties of default ClientSession params to ClientSession class so it is available for introspection
`#3882 <https://github.com/aio-libs/aiohttp/issues/3882>`_
- Don't cancel web handler on peer disconnection, raise `OSError` on reading/writing instead.
`#4080 <https://github.com/aio-libs/aiohttp/issues/4080>`_
- Implement BaseRequest.get_extra_info() to access a protocol transport's extra info.
`#4189 <https://github.com/aio-libs/aiohttp/issues/4189>`_
- Added `ClientSession.timeout` property.
`#4191 <https://github.com/aio-libs/aiohttp/issues/4191>`_
- allow use of SameSite in cookies.
`#4224 <https://github.com/aio-libs/aiohttp/issues/4224>`_
- Use ``loop.sendfile()`` instead of custom implementation if available.
`#4269 <https://github.com/aio-libs/aiohttp/issues/4269>`_
- Apply SO_REUSEADDR to test server's socket.
`#4393 <https://github.com/aio-libs/aiohttp/issues/4393>`_
- Use .raw_host instead of slower .host in client API
`#4402 <https://github.com/aio-libs/aiohttp/issues/4402>`_
- Allow configuring the buffer size of input stream by passing ``read_bufsize`` argument.
`#4453 <https://github.com/aio-libs/aiohttp/issues/4453>`_
- Pass tests on Python 3.8 for Windows.
`#4513 <https://github.com/aio-libs/aiohttp/issues/4513>`_
- Add `method` and `url` attributes to `TraceRequestChunkSentParams` and `TraceResponseChunkReceivedParams`.
`#4674 <https://github.com/aio-libs/aiohttp/issues/4674>`_
- Add ClientResponse.ok property for checking status code under 400.
`#4711 <https://github.com/aio-libs/aiohttp/issues/4711>`_
- Don't ceil timeouts that are smaller than 5 seconds.
`#4850 <https://github.com/aio-libs/aiohttp/issues/4850>`_
- TCPSite now listens by default on all interfaces instead of just IPv4 when `None` is passed in as the host.
`#4894 <https://github.com/aio-libs/aiohttp/issues/4894>`_
- Bump ``http_parser`` to 2.9.4
`#5070 <https://github.com/aio-libs/aiohttp/issues/5070>`_
Bugfixes
--------
- Fix keepalive connections not being closed in time
`#3296 <https://github.com/aio-libs/aiohttp/issues/3296>`_
- Fix failed websocket handshake leaving connection hanging.
`#3380 <https://github.com/aio-libs/aiohttp/issues/3380>`_
- Fix tasks cancellation order on exit. The run_app task needs to be cancelled first for cleanup hooks to run with all tasks intact.
`#3805 <https://github.com/aio-libs/aiohttp/issues/3805>`_
- Don't start heartbeat until _writer is set
`#4062 <https://github.com/aio-libs/aiohttp/issues/4062>`_
- Fix handling of multipart file uploads without a content type.
`#4089 <https://github.com/aio-libs/aiohttp/issues/4089>`_
- Preserve view handler function attributes across middlewares
`#4174 <https://github.com/aio-libs/aiohttp/issues/4174>`_
- Fix the string representation of ``ServerDisconnectedError``.
`#4175 <https://github.com/aio-libs/aiohttp/issues/4175>`_
- Raise ``RuntimeError`` when trying to get the encoding from a body that has not been read
`#4214 <https://github.com/aio-libs/aiohttp/issues/4214>`_
- Remove warning messages from noop.
`#4282 <https://github.com/aio-libs/aiohttp/issues/4282>`_
- Raise ClientPayloadError if FormData re-processed.
`#4345 <https://github.com/aio-libs/aiohttp/issues/4345>`_
- Fix a warning about unfinished task in ``web_protocol.py``
`#4408 <https://github.com/aio-libs/aiohttp/issues/4408>`_
- Fixed 'deflate' compression to conform to RFC 2616.
`#4506 <https://github.com/aio-libs/aiohttp/issues/4506>`_
- Fixed OverflowError on platforms with 32-bit time_t
`#4515 <https://github.com/aio-libs/aiohttp/issues/4515>`_
- Fixed ``request.body_exists`` returning a wrong value for methods without a body.
`#4528 <https://github.com/aio-libs/aiohttp/issues/4528>`_
- Fix connecting to link-local IPv6 addresses.
`#4554 <https://github.com/aio-libs/aiohttp/issues/4554>`_
- Fix a problem with connection waiters that are never awaited.
`#4562 <https://github.com/aio-libs/aiohttp/issues/4562>`_
- Always make sure the transport is not closing before reusing a connection.
Reusing a protocol based on keepalive headers is unreliable.
For example, uWSGI will not support keepalive even when it serves an
HTTP/1.1 request, unless uWSGI is explicitly configured with the
``--http-keepalive`` option.
Servers designed like uWSGI could cause aiohttp to intermittently
raise a ConnectionResetException when the protocol pool runs
out and a protocol is reused.
`#4587 <https://github.com/aio-libs/aiohttp/issues/4587>`_
- Handle the last CRLF correctly even if it is received via separate TCP segment.
`#4630 <https://github.com/aio-libs/aiohttp/issues/4630>`_
- Fix the register_resource function to validate route name before splitting it so that route name can include python keywords.
`#4691 <https://github.com/aio-libs/aiohttp/issues/4691>`_
- Improve typing annotations for ``web.Request``, ``aiohttp.ClientResponse`` and
``multipart`` module.
`#4736 <https://github.com/aio-libs/aiohttp/issues/4736>`_
- Fix the resolver task not being awaited when the connector is cancelled
`#4795 <https://github.com/aio-libs/aiohttp/issues/4795>`_
- Fix a bug "Aiohttp doesn't return any error on invalid request methods"
`#4798 <https://github.com/aio-libs/aiohttp/issues/4798>`_
- Fix HEAD requests for static content.
`#4809 <https://github.com/aio-libs/aiohttp/issues/4809>`_
- Fix incorrect size calculation for memoryview
`#4890 <https://github.com/aio-libs/aiohttp/issues/4890>`_
- Add HTTPMove to ``__all__``.
`#4897 <https://github.com/aio-libs/aiohttp/issues/4897>`_
- Fixed the type annotations in the ``tracing`` module.
`#4912 <https://github.com/aio-libs/aiohttp/issues/4912>`_
- Fix typing for multipart ``__aiter__``.
`#4931 <https://github.com/aio-libs/aiohttp/issues/4931>`_
- Fix for race condition on connections in BaseConnector that leads to exceeding the connection limit.
`#4936 <https://github.com/aio-libs/aiohttp/issues/4936>`_
- Add forced UTF-8 encoding for ``application/rdap+json`` responses.
`#4938 <https://github.com/aio-libs/aiohttp/issues/4938>`_
- Fix inconsistency between Python and C http request parsers in parsing pct-encoded URL.
`#4972 <https://github.com/aio-libs/aiohttp/issues/4972>`_
- Fix connection closing issue in HEAD request.
`#5012 <https://github.com/aio-libs/aiohttp/issues/5012>`_
- Fix type hint on BaseRunner.addresses (from ``List[str]`` to ``List[Any]``)
`#5086 <https://github.com/aio-libs/aiohttp/issues/5086>`_
- Make `web.run_app()` more responsive to Ctrl+C on Windows for Python < 3.8. It slightly
increases CPU load as a side effect.
`#5098 <https://github.com/aio-libs/aiohttp/issues/5098>`_
Improved Documentation
----------------------
- Fix example code in client quick-start
`#3376 <https://github.com/aio-libs/aiohttp/issues/3376>`_
- Updated the docs so there is no contradiction in ``ttl_dns_cache`` default value
`#3512 <https://github.com/aio-libs/aiohttp/issues/3512>`_
- Add 'Deploy with SSL' to docs.
`#4201 <https://github.com/aio-libs/aiohttp/issues/4201>`_
- Change typing of the secure argument on StreamResponse.set_cookie from ``Optional[str]`` to ``Optional[bool]``
`#4204 <https://github.com/aio-libs/aiohttp/issues/4204>`_
- Changes ``ttl_dns_cache`` type from int to Optional[int].
`#4270 <https://github.com/aio-libs/aiohttp/issues/4270>`_
- Simplify the README hello world example and add a documentation page for people coming from requests.
`#4272 <https://github.com/aio-libs/aiohttp/issues/4272>`_
- Improve some code examples in the documentation involving websockets and starting a simple HTTP site with an AppRunner.
`#4285 <https://github.com/aio-libs/aiohttp/issues/4285>`_
- Fix typo in code example in Multipart docs
`#4312 <https://github.com/aio-libs/aiohttp/issues/4312>`_
- Fix code example in Multipart section.
`#4314 <https://github.com/aio-libs/aiohttp/issues/4314>`_
- Update contributing guide so new contributors read the most recent version of that guide. Update command used to create test coverage reporting.
`#4810 <https://github.com/aio-libs/aiohttp/issues/4810>`_
- Spelling: Change "canonize" to "canonicalize".
`#4986 <https://github.com/aio-libs/aiohttp/issues/4986>`_
- Add ``aiohttp-sse-client`` library to third party usage list.
`#5084 <https://github.com/aio-libs/aiohttp/issues/5084>`_
Misc
----
- `#2856 <https://github.com/aio-libs/aiohttp/issues/2856>`_, `#4218 <https://github.com/aio-libs/aiohttp/issues/4218>`_, `#4250 <https://github.com/aio-libs/aiohttp/issues/4250>`_
----
3.6.3 (2020-10-12)
==================
Bugfixes
--------
- Pin yarl to ``<1.6.0`` to avoid buggy behavior that will be fixed by the next aiohttp
release.
3.6.2 (2019-10-09)
==================
Features
--------
- Made exceptions pickleable. Also changed the repr of some exceptions.
`#4077 <https://github.com/aio-libs/aiohttp/issues/4077>`_
- Use ``Iterable`` type hint instead of ``Sequence`` for ``Application`` *middleware*
parameter. `#4125 <https://github.com/aio-libs/aiohttp/issues/4125>`_
Bugfixes
--------
- Reset the ``sock_read`` timeout each time data is received for a
``aiohttp.ClientResponse``. `#3808
<https://github.com/aio-libs/aiohttp/issues/3808>`_
- Fix handling of expired cookies so they are not stored in CookieJar.
`#4063 <https://github.com/aio-libs/aiohttp/issues/4063>`_
- Fix misleading message in the string representation of ``ClientConnectorError``;
``self.ssl == None`` means default SSL context, not SSL disabled `#4097
<https://github.com/aio-libs/aiohttp/issues/4097>`_
- Don't clobber HTTP status when using FileResponse.
`#4106 <https://github.com/aio-libs/aiohttp/issues/4106>`_
Improved Documentation
----------------------
- Added minimal required logging configuration to logging documentation.
`#2469 <https://github.com/aio-libs/aiohttp/issues/2469>`_
- Update docs to reflect proxy support.
`#4100 <https://github.com/aio-libs/aiohttp/issues/4100>`_
- Fix typo in code example in testing docs.
`#4108 <https://github.com/aio-libs/aiohttp/issues/4108>`_
Misc
----
- `#4102 <https://github.com/aio-libs/aiohttp/issues/4102>`_
----
3.6.1 (2019-09-19)
==================
Features
--------
- Compatibility with Python 3.8.
`#4056 <https://github.com/aio-libs/aiohttp/issues/4056>`_
Bugfixes
--------
- correct some exception string format
`#4068 <https://github.com/aio-libs/aiohttp/issues/4068>`_
- Emit a warning when ``ssl.OP_NO_COMPRESSION`` is
unavailable because the runtime is built against
an outdated OpenSSL.
`#4052 <https://github.com/aio-libs/aiohttp/issues/4052>`_
- Update multidict requirement to >= 4.5
`#4057 <https://github.com/aio-libs/aiohttp/issues/4057>`_
Improved Documentation
----------------------
- Provide pytest-aiohttp namespace for pytest fixtures in docs.
`#3723 <https://github.com/aio-libs/aiohttp/issues/3723>`_
----
3.6.0 (2019-09-06)
==================
Features
--------
- Add support for Named Pipes (Site and Connector) under Windows. This feature requires
Proactor event loop to work. `#3629
<https://github.com/aio-libs/aiohttp/issues/3629>`_
- Removed ``Transfer-Encoding: chunked`` header from websocket responses to be
compatible with more http proxy servers. `#3798
<https://github.com/aio-libs/aiohttp/issues/3798>`_
- Accept non-GET request for starting websocket handshake on server side.
`#3980 <https://github.com/aio-libs/aiohttp/issues/3980>`_
Bugfixes
--------
- Raise a ClientResponseError instead of an AssertionError for a blank
HTTP Reason Phrase.
`#3532 <https://github.com/aio-libs/aiohttp/issues/3532>`_
- Fix an issue where cookies would sometimes not be set during a redirect.
`#3576 <https://github.com/aio-libs/aiohttp/issues/3576>`_
- Change normalize_path_middleware to use 308 redirect instead of 301.
This behavior should prevent clients from being unable to use PUT/POST
methods on endpoints that are redirected because of a trailing slash.
`#3579 <https://github.com/aio-libs/aiohttp/issues/3579>`_
- Drop the processed task from ``all_tasks()`` list early. It prevents logging about a
task with unhandled exception when the server is used in conjunction with
``asyncio.run()``. `#3587 <https://github.com/aio-libs/aiohttp/issues/3587>`_
- ``Signal`` type annotation changed from ``Signal[Callable[['TraceConfig'],
Awaitable[None]]]`` to ``Signal[Callable[ClientSession, SimpleNamespace, ...]``.
`#3595 <https://github.com/aio-libs/aiohttp/issues/3595>`_
- Use sanitized URL as Location header in redirects
`#3614 <https://github.com/aio-libs/aiohttp/issues/3614>`_
- Improve typing annotations for multipart.py along with changes required
by mypy in files that references multipart.py.
`#3621 <https://github.com/aio-libs/aiohttp/issues/3621>`_
- Close session created inside ``aiohttp.request`` when unhandled exception occurs
`#3628 <https://github.com/aio-libs/aiohttp/issues/3628>`_
- Cleanup per-chunk data in generic data read. Memory leak fixed.
`#3631 <https://github.com/aio-libs/aiohttp/issues/3631>`_
- Use correct type for add_view and family
`#3633 <https://github.com/aio-libs/aiohttp/issues/3633>`_
- Fix _keepalive field in __slots__ of ``RequestHandler``.
`#3644 <https://github.com/aio-libs/aiohttp/issues/3644>`_
- Properly handle ConnectionResetError, to silence the "Cannot write to closing
transport" exception when clients disconnect uncleanly.
`#3648 <https://github.com/aio-libs/aiohttp/issues/3648>`_
- Suppress pytest warnings due to ``test_utils`` classes
`#3660 <https://github.com/aio-libs/aiohttp/issues/3660>`_
- Fix overshadowing of overlapped sub-application prefixes.
`#3701 <https://github.com/aio-libs/aiohttp/issues/3701>`_
- Fixed return type annotation for WSMessage.json()
`#3720 <https://github.com/aio-libs/aiohttp/issues/3720>`_
- Properly expose TooManyRedirects publicly as documented.
`#3818 <https://github.com/aio-libs/aiohttp/issues/3818>`_
- Fix missing brackets for IPv6 in proxy CONNECT request
`#3841 <https://github.com/aio-libs/aiohttp/issues/3841>`_
- Make the signature of ``aiohttp.test_utils.TestClient.request`` match
``asyncio.ClientSession.request`` according to the docs `#3852
<https://github.com/aio-libs/aiohttp/issues/3852>`_
- Use correct style for re-exported imports, makes mypy ``--strict`` mode happy.
`#3868 <https://github.com/aio-libs/aiohttp/issues/3868>`_
- Fixed type annotation for add_view method of UrlDispatcher to accept any subclass of
View `#3880 <https://github.com/aio-libs/aiohttp/issues/3880>`_
- Made cython HTTP parser set Reason-Phrase of the response to an empty string if it is
missing. `#3906 <https://github.com/aio-libs/aiohttp/issues/3906>`_
- Add URL to the string representation of ClientResponseError.
`#3959 <https://github.com/aio-libs/aiohttp/issues/3959>`_
- Accept ``istr`` keys in ``LooseHeaders`` type hints.
`#3976 <https://github.com/aio-libs/aiohttp/issues/3976>`_
- Fixed race conditions in _resolve_host caching and throttling when tracing is enabled.
`#4013 <https://github.com/aio-libs/aiohttp/issues/4013>`_
- For URLs like "unix://localhost/..." set Host HTTP header to "localhost" instead of
"localhost:None". `#4039 <https://github.com/aio-libs/aiohttp/issues/4039>`_
Improved Documentation
----------------------
- Modify documentation for Background Tasks to remove deprecated usage of event loop.
`#3526 <https://github.com/aio-libs/aiohttp/issues/3526>`_
- use ``if __name__ == '__main__':`` in server examples.
`#3775 <https://github.com/aio-libs/aiohttp/issues/3775>`_
- Update documentation reference to the default access logger.
`#3783 <https://github.com/aio-libs/aiohttp/issues/3783>`_
- Improve documentation for ``web.BaseRequest.path`` and ``web.BaseRequest.raw_path``.
`#3791 <https://github.com/aio-libs/aiohttp/issues/3791>`_
- Removed deprecation warning in tracing example docs
`#3964 <https://github.com/aio-libs/aiohttp/issues/3964>`_
----
3.5.4 (2019-01-12)
==================
Bugfixes
--------
- Fix stream ``.read()`` / ``.readany()`` / ``.iter_any()`` which used to return a
partial content only in case of compressed content
`#3525 <https://github.com/aio-libs/aiohttp/issues/3525>`_
3.5.3 (2019-01-10)
==================
Bugfixes
--------
- Fix type stubs for ``aiohttp.web.run_app(access_log=True)`` and fix edge case of
``access_log=True`` and the event loop being in debug mode. `#3504
<https://github.com/aio-libs/aiohttp/issues/3504>`_
- Fix ``aiohttp.ClientTimeout`` type annotations to accept ``None`` for fields
`#3511 <https://github.com/aio-libs/aiohttp/issues/3511>`_
- Send custom per-request cookies even if session jar is empty
`#3515 <https://github.com/aio-libs/aiohttp/issues/3515>`_
- Restore Linux binary wheels publishing on PyPI
----
3.5.2 (2019-01-08)
==================
Features
--------
- ``FileResponse`` from ``web_fileresponse.py`` uses a ``ThreadPoolExecutor`` to work
with files asynchronously. I/O based payloads from ``payload.py`` uses a
``ThreadPoolExecutor`` to work with I/O objects asynchronously. `#3313
<https://github.com/aio-libs/aiohttp/issues/3313>`_
- Internal Server Errors in plain text if the browser does not support HTML.
`#3483 <https://github.com/aio-libs/aiohttp/issues/3483>`_
Bugfixes
--------
- Preserve MultipartWriter parts headers on write. Refactor the way how
``Payload.headers`` are handled. Payload instances now always have headers and
Content-Type defined. Fix Payload Content-Disposition header reset after initial
creation. `#3035 <https://github.com/aio-libs/aiohttp/issues/3035>`_
- Log suppressed exceptions in ``GunicornWebWorker``.
`#3464 <https://github.com/aio-libs/aiohttp/issues/3464>`_
- Remove wildcard imports.
`#3468 <https://github.com/aio-libs/aiohttp/issues/3468>`_
- Use the same task for app initialization and web server handling in gunicorn workers.
This allows Python 3.7 context variables to be used smoothly.
`#3471 <https://github.com/aio-libs/aiohttp/issues/3471>`_
- Fix handling of chunked+gzipped response when first chunk does not give uncompressed
data `#3477 <https://github.com/aio-libs/aiohttp/issues/3477>`_
- Replace ``collections.MutableMapping`` with ``collections.abc.MutableMapping`` to
avoid a deprecation warning. `#3480
<https://github.com/aio-libs/aiohttp/issues/3480>`_
- ``Payload.size`` type annotation changed from ``Optional[float]`` to
``Optional[int]``. `#3484 <https://github.com/aio-libs/aiohttp/issues/3484>`_
- Ignore done tasks when cancelling pending activities on ``web.run_app`` finalization.
`#3497 <https://github.com/aio-libs/aiohttp/issues/3497>`_
Improved Documentation
----------------------
- Add documentation for ``aiohttp.web.HTTPException``.
`#3490 <https://github.com/aio-libs/aiohttp/issues/3490>`_
Misc
----
- `#3487 <https://github.com/aio-libs/aiohttp/issues/3487>`_
----
3.5.1 (2018-12-24)
====================
- Fix a regression about ``ClientSession._requote_redirect_url`` modification in debug
mode.
3.5.0 (2018-12-22)
====================
Features
--------
- The library type annotations are checked in strict mode now.
- Add support for setting cookies for individual request (`#2387
<https://github.com/aio-libs/aiohttp/pull/2387>`_)
- Application.add_domain implementation (`#2809
<https://github.com/aio-libs/aiohttp/pull/2809>`_)
- The default ``app`` in the request returned by ``test_utils.make_mocked_request`` can
now have objects assigned to it and retrieved using the ``[]`` operator. (`#3174
<https://github.com/aio-libs/aiohttp/pull/3174>`_)
- Make ``request.url`` accessible when transport is closed. (`#3177
<https://github.com/aio-libs/aiohttp/pull/3177>`_)
- Add ``zlib_executor_size`` argument to ``Response`` constructor to allow compression
to run in a background executor to avoid blocking the main thread and potentially
triggering health check failures. (`#3205
<https://github.com/aio-libs/aiohttp/pull/3205>`_)
- Enable users to set ``ClientTimeout`` in ``aiohttp.request`` (`#3213
<https://github.com/aio-libs/aiohttp/pull/3213>`_)
- Don't raise a warning if ``NETRC`` environment variable is not set and ``~/.netrc``
file doesn't exist. (`#3267 <https://github.com/aio-libs/aiohttp/pull/3267>`_)
- Add a default logging handler to ``web.run_app``. If the ``Application.debug`` flag is set
and the default logger ``aiohttp.access`` is used, access logs will now be output
using a *stderr* ``StreamHandler`` if no handlers are attached. Furthermore, if the
default logger has no log level set, the log level will be set to ``DEBUG``. (`#3324
<https://github.com/aio-libs/aiohttp/pull/3324>`_)
- Add method argument to ``session.ws_connect()``. Sometimes server API requires a
different HTTP method for WebSocket connection establishment. For example, ``Docker
exec`` needs POST. (`#3378 <https://github.com/aio-libs/aiohttp/pull/3378>`_)
- Create a task per request handling. (`#3406
<https://github.com/aio-libs/aiohttp/pull/3406>`_)
Bugfixes
--------
- Enable passing ``access_log_class`` via ``handler_args`` (`#3158
<https://github.com/aio-libs/aiohttp/pull/3158>`_)
- Return empty bytes with end-of-chunk marker in empty stream reader. (`#3186
<https://github.com/aio-libs/aiohttp/pull/3186>`_)
- Accept ``CIMultiDictProxy`` instances for ``headers`` argument in ``web.Response``
constructor. (`#3207 <https://github.com/aio-libs/aiohttp/pull/3207>`_)
- Don't uppercase HTTP method in parser (`#3233
<https://github.com/aio-libs/aiohttp/pull/3233>`_)
- Make method match regexp RFC-7230 compliant (`#3235
<https://github.com/aio-libs/aiohttp/pull/3235>`_)
- Add ``app.pre_frozen`` state to properly handle startup signals in
sub-applications. (`#3237 <https://github.com/aio-libs/aiohttp/pull/3237>`_)
- Enhanced parsing and validation of helpers.BasicAuth.decode. (`#3239
<https://github.com/aio-libs/aiohttp/pull/3239>`_)
- Change imports from collections module in preparation for 3.8. (`#3258
<https://github.com/aio-libs/aiohttp/pull/3258>`_)
- Ensure Host header is added first to ClientRequest to better replicate browser (`#3265
<https://github.com/aio-libs/aiohttp/pull/3265>`_)
- Fix forward compatibility with Python 3.8: importing ABCs directly from the
collections module will not be supported anymore. (`#3273
<https://github.com/aio-libs/aiohttp/pull/3273>`_)
- Keep the query string by ``normalize_path_middleware``. (`#3278
<https://github.com/aio-libs/aiohttp/pull/3278>`_)
- Fix missing parameter ``raise_for_status`` for aiohttp.request() (`#3290
<https://github.com/aio-libs/aiohttp/pull/3290>`_)
- Bracket IPv6 addresses in the HOST header (`#3304
<https://github.com/aio-libs/aiohttp/pull/3304>`_)
- Fix default message for server ping and pong frames. (`#3308
<https://github.com/aio-libs/aiohttp/pull/3308>`_)
- Fix tests/test_connector.py typo and tests/autobahn/server.py duplicate loop
def. (`#3337 <https://github.com/aio-libs/aiohttp/pull/3337>`_)
- Fix false-negative indicator end_of_HTTP_chunk in StreamReader.readchunk function
(`#3361 <https://github.com/aio-libs/aiohttp/pull/3361>`_)
- Release HTTP response before raising status exception (`#3364
<https://github.com/aio-libs/aiohttp/pull/3364>`_)
- Fix task cancellation when ``sendfile()`` syscall is used by static file
handling. (`#3383 <https://github.com/aio-libs/aiohttp/pull/3383>`_)
- Fix stack trace for ``asyncio.TimeoutError`` which was not logged, when it is caught
in the handler. (`#3414 <https://github.com/aio-libs/aiohttp/pull/3414>`_)
Improved Documentation
----------------------
- Improve documentation of ``Application.make_handler`` parameters. (`#3152
<https://github.com/aio-libs/aiohttp/pull/3152>`_)
- Fix BaseRequest.raw_headers doc. (`#3215
<https://github.com/aio-libs/aiohttp/pull/3215>`_)
- Fix typo in TypeError exception reason in ``web.Application._handle`` (`#3229
<https://github.com/aio-libs/aiohttp/pull/3229>`_)
- Make server access log format placeholder %b documentation reflect
behavior and docstring. (`#3307 <https://github.com/aio-libs/aiohttp/pull/3307>`_)
Deprecations and Removals
-------------------------
- Deprecate modification of ``session.requote_redirect_url`` (`#2278
<https://github.com/aio-libs/aiohttp/pull/2278>`_)
- Deprecate ``stream.unread_data()`` (`#3260
<https://github.com/aio-libs/aiohttp/pull/3260>`_)
- Deprecated use of boolean in ``resp.enable_compression()`` (`#3318
<https://github.com/aio-libs/aiohttp/pull/3318>`_)
- Encourage creation of aiohttp public objects inside a coroutine (`#3331
<https://github.com/aio-libs/aiohttp/pull/3331>`_)
- Drop dead ``Connection.detach()`` and ``Connection.writer``. Both methods were broken
for more than 2 years. (`#3358 <https://github.com/aio-libs/aiohttp/pull/3358>`_)
- Deprecate ``app.loop``, ``request.loop``, ``client.loop`` and ``connector.loop``
properties. (`#3374 <https://github.com/aio-libs/aiohttp/pull/3374>`_)
- Deprecate explicit debug argument. Use asyncio debug mode instead. (`#3381
<https://github.com/aio-libs/aiohttp/pull/3381>`_)
- Deprecate body parameter in HTTPException (and derived classes) constructor. (`#3385
<https://github.com/aio-libs/aiohttp/pull/3385>`_)
- Deprecate bare connector close, use ``async with connector:`` and ``await
connector.close()`` instead. (`#3417
<https://github.com/aio-libs/aiohttp/pull/3417>`_)
- Deprecate obsolete ``read_timeout`` and ``conn_timeout`` in ``ClientSession``
constructor. (`#3438 <https://github.com/aio-libs/aiohttp/pull/3438>`_)
Misc
----
- #3341, #3351
Platform: UNKNOWN
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Development Status :: 5 - Production/Stable
Classifier: Operating System :: POSIX
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Framework :: AsyncIO
Requires-Python: >=3.6
Provides-Extra: speedups

View file

@ -1,246 +0,0 @@
CHANGES.rst
CONTRIBUTORS.txt
LICENSE.txt
MANIFEST.in
Makefile
README.rst
pyproject.toml
setup.cfg
setup.py
aiohttp/__init__.py
aiohttp/_cparser.pxd
aiohttp/_find_header.c
aiohttp/_find_header.h
aiohttp/_find_header.pxd
aiohttp/_frozenlist.c
aiohttp/_frozenlist.pyx
aiohttp/_headers.pxi
aiohttp/_helpers.c
aiohttp/_helpers.pyi
aiohttp/_helpers.pyx
aiohttp/_http_parser.c
aiohttp/_http_parser.pyx
aiohttp/_http_writer.c
aiohttp/_http_writer.pyx
aiohttp/_websocket.c
aiohttp/_websocket.pyx
aiohttp/abc.py
aiohttp/base_protocol.py
aiohttp/client.py
aiohttp/client_exceptions.py
aiohttp/client_proto.py
aiohttp/client_reqrep.py
aiohttp/client_ws.py
aiohttp/connector.py
aiohttp/cookiejar.py
aiohttp/formdata.py
aiohttp/frozenlist.py
aiohttp/frozenlist.pyi
aiohttp/hdrs.py
aiohttp/helpers.py
aiohttp/http.py
aiohttp/http_exceptions.py
aiohttp/http_parser.py
aiohttp/http_websocket.py
aiohttp/http_writer.py
aiohttp/locks.py
aiohttp/log.py
aiohttp/multipart.py
aiohttp/payload.py
aiohttp/payload_streamer.py
aiohttp/py.typed
aiohttp/pytest_plugin.py
aiohttp/resolver.py
aiohttp/signals.py
aiohttp/signals.pyi
aiohttp/streams.py
aiohttp/tcp_helpers.py
aiohttp/test_utils.py
aiohttp/tracing.py
aiohttp/typedefs.py
aiohttp/web.py
aiohttp/web_app.py
aiohttp/web_exceptions.py
aiohttp/web_fileresponse.py
aiohttp/web_log.py
aiohttp/web_middlewares.py
aiohttp/web_protocol.py
aiohttp/web_request.py
aiohttp/web_response.py
aiohttp/web_routedef.py
aiohttp/web_runner.py
aiohttp/web_server.py
aiohttp/web_urldispatcher.py
aiohttp/web_ws.py
aiohttp/worker.py
aiohttp.egg-info/PKG-INFO
aiohttp.egg-info/SOURCES.txt
aiohttp.egg-info/dependency_links.txt
aiohttp.egg-info/requires.txt
aiohttp.egg-info/top_level.txt
aiohttp/.hash/_cparser.pxd.hash
aiohttp/.hash/_find_header.pxd.hash
aiohttp/.hash/_frozenlist.pyx.hash
aiohttp/.hash/_helpers.pyi.hash
aiohttp/.hash/_helpers.pyx.hash
aiohttp/.hash/_http_parser.pyx.hash
aiohttp/.hash/_http_writer.pyx.hash
aiohttp/.hash/_websocket.pyx.hash
aiohttp/.hash/frozenlist.pyi.hash
aiohttp/.hash/hdrs.py.hash
aiohttp/.hash/signals.pyi.hash
docs/Makefile
docs/abc.rst
docs/aiohttp-icon.svg
docs/aiohttp-plain.svg
docs/built_with.rst
docs/changes.rst
docs/client.rst
docs/client_advanced.rst
docs/client_quickstart.rst
docs/client_reference.rst
docs/conf.py
docs/contributing.rst
docs/deployment.rst
docs/essays.rst
docs/external.rst
docs/faq.rst
docs/favicon.ico
docs/glossary.rst
docs/http_request_lifecycle.rst
docs/index.rst
docs/logging.rst
docs/make.bat
docs/migration_to_2xx.rst
docs/misc.rst
docs/multipart.rst
docs/multipart_reference.rst
docs/new_router.rst
docs/old-logo.png
docs/old-logo.svg
docs/powered_by.rst
docs/signals.rst
docs/spelling_wordlist.txt
docs/streams.rst
docs/structures.rst
docs/testing.rst
docs/third_party.rst
docs/tracing_reference.rst
docs/utilities.rst
docs/web.rst
docs/web_advanced.rst
docs/web_lowlevel.rst
docs/web_quickstart.rst
docs/web_reference.rst
docs/websocket_utilities.rst
docs/whats_new_1_1.rst
docs/whats_new_3_0.rst
docs/_static/aiohttp-icon-128x128.png
examples/background_tasks.py
examples/cli_app.py
examples/client_auth.py
examples/client_json.py
examples/client_ws.py
examples/curl.py
examples/fake_server.py
examples/lowlevel_srv.py
examples/server.crt
examples/server.csr
examples/server.key
examples/server_simple.py
examples/static_files.py
examples/web_classview.py
examples/web_cookies.py
examples/web_rewrite_headers_middleware.py
examples/web_srv.py
examples/web_srv_route_deco.py
examples/web_srv_route_table.py
examples/web_ws.py
examples/websocket.html
examples/legacy/crawl.py
examples/legacy/srv.py
examples/legacy/tcp_protocol_parser.py
tests/aiohttp.jpg
tests/aiohttp.png
tests/conftest.py
tests/data.unknown_mime_type
tests/data.zero_bytes
tests/hello.txt.gz
tests/test_base_protocol.py
tests/test_classbasedview.py
tests/test_client_connection.py
tests/test_client_exceptions.py
tests/test_client_fingerprint.py
tests/test_client_functional.py
tests/test_client_proto.py
tests/test_client_request.py
tests/test_client_response.py
tests/test_client_session.py
tests/test_client_ws.py
tests/test_client_ws_functional.py
tests/test_connector.py
tests/test_cookiejar.py
tests/test_flowcontrol_streams.py
tests/test_formdata.py
tests/test_frozenlist.py
tests/test_helpers.py
tests/test_http_exceptions.py
tests/test_http_parser.py
tests/test_http_writer.py
tests/test_locks.py
tests/test_loop.py
tests/test_multipart.py
tests/test_multipart_helpers.py
tests/test_payload.py
tests/test_proxy.py
tests/test_proxy_functional.py
tests/test_pytest_plugin.py
tests/test_resolver.py
tests/test_route_def.py
tests/test_run_app.py
tests/test_signals.py
tests/test_streams.py
tests/test_tcp_helpers.py
tests/test_test_utils.py
tests/test_tracing.py
tests/test_urldispatch.py
tests/test_web_app.py
tests/test_web_cli.py
tests/test_web_exceptions.py
tests/test_web_functional.py
tests/test_web_log.py
tests/test_web_middleware.py
tests/test_web_protocol.py
tests/test_web_request.py
tests/test_web_request_handler.py
tests/test_web_response.py
tests/test_web_runner.py
tests/test_web_sendfile.py
tests/test_web_sendfile_functional.py
tests/test_web_server.py
tests/test_web_urldispatcher.py
tests/test_web_websocket.py
tests/test_web_websocket_functional.py
tests/test_websocket_handshake.py
tests/test_websocket_parser.py
tests/test_websocket_writer.py
tests/test_worker.py
tests/autobahn/client.py
tests/autobahn/fuzzingclient.json
tests/autobahn/fuzzingserver.json
tests/autobahn/server.py
vendor/http-parser/.git
vendor/http-parser/.gitignore
vendor/http-parser/.mailmap
vendor/http-parser/.travis.yml
vendor/http-parser/AUTHORS
vendor/http-parser/LICENSE-MIT
vendor/http-parser/Makefile
vendor/http-parser/README.md
vendor/http-parser/bench.c
vendor/http-parser/http_parser.c
vendor/http-parser/http_parser.gyp
vendor/http-parser/http_parser.h
vendor/http-parser/test.c
vendor/http-parser/contrib/parsertrace.c
vendor/http-parser/contrib/url_parser.c

View file

@ -1,14 +0,0 @@
attrs>=17.3.0
chardet<5.0,>=2.0
multidict<7.0,>=4.5
async_timeout<4.0,>=3.0
yarl<2.0,>=1.0
typing_extensions>=3.6.5
[:python_version < "3.7"]
idna-ssl>=1.0
[speedups]
aiodns
brotlipy
cchardet

View file

@ -1 +0,0 @@
aiohttp

View file

@ -1,187 +0,0 @@
Metadata-Version: 2.1
Name: coverage
Version: 5.1
Summary: Code coverage measurement for Python
Home-page: https://github.com/nedbat/coveragepy
Author: Ned Batchelder and 131 others
Author-email: ned@nedbatchelder.com
License: Apache 2.0
Project-URL: Documentation, https://coverage.readthedocs.io
Project-URL: Funding, https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=pypi
Project-URL: Issues, https://github.com/nedbat/coveragepy/issues
Description: .. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
===========
Coverage.py
===========
Code coverage testing for Python.
| |license| |versions| |status|
| |ci-status| |win-ci-status| |docs| |codecov|
| |kit| |format| |repos|
| |stars| |forks| |contributors|
| |tidelift| |twitter-coveragepy| |twitter-nedbat|
Coverage.py measures code coverage, typically during test execution. It uses
the code analysis tools and tracing hooks provided in the Python standard
library to determine which lines are executable, and which have been executed.
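As a rough sketch of that measurement flow through the public ``coverage`` API (the measured module below is hypothetical):
.. code-block:: python

    import coverage

    cov = coverage.Coverage()
    cov.start()

    # Anything imported or called here is traced line by line.
    import mymodule  # hypothetical module under measurement
    mymodule.main()

    cov.stop()
    cov.save()
    cov.report(show_missing=True)  # print which lines ran and which were missed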
Coverage.py runs on many versions of Python:
* CPython 2.7.
* CPython 3.5 through 3.9 alpha 4.
* PyPy2 7.3.0 and PyPy3 7.3.0.
Documentation is on `Read the Docs`_. Code repository and issue tracker are on
`GitHub`_.
.. _Read the Docs: https://coverage.readthedocs.io/
.. _GitHub: https://github.com/nedbat/coveragepy
**New in 5.0:** SQLite data storage, JSON report, contexts, relative filenames,
dropped support for Python 2.6, 3.3 and 3.4.
For Enterprise
--------------
.. |tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logo_small.png
:width: 75
:alt: Tidelift
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
.. list-table::
   :widths: 10 100

   * - |tideliftlogo|
     - `Available as part of the Tidelift Subscription. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
       Coverage and thousands of other packages are working with
       Tidelift to deliver one enterprise subscription that covers all of the open
       source you use. If you want the flexibility of open source and the confidence
       of commercial-grade software, this is for you.

       `Learn more. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Getting Started
---------------
See the `Quick Start section`_ of the docs.
.. _Quick Start section: https://coverage.readthedocs.io/#quick-start
Change history
--------------
The complete history of changes is on the `change history page`_.
.. _change history page: https://coverage.readthedocs.io/en/latest/changes.html
Contributing
------------
See the `Contributing section`_ of the docs.
.. _Contributing section: https://coverage.readthedocs.io/en/latest/contributing.html
Security
--------
To report a security vulnerability, please use the `Tidelift security
contact`_. Tidelift will coordinate the fix and disclosure.
.. _Tidelift security contact: https://tidelift.com/security
License
-------
Licensed under the `Apache 2.0 License`_. For details, see `NOTICE.txt`_.
.. _Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0
.. _NOTICE.txt: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
.. |ci-status| image:: https://travis-ci.com/nedbat/coveragepy.svg?branch=master
:target: https://travis-ci.com/nedbat/coveragepy
:alt: Build status
.. |win-ci-status| image:: https://ci.appveyor.com/api/projects/status/kmeqpdje7h9r6vsf/branch/master?svg=true
:target: https://ci.appveyor.com/project/nedbat/coveragepy
:alt: Windows build status
.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat
:target: https://coverage.readthedocs.io/
:alt: Documentation
.. |reqs| image:: https://requires.io/github/nedbat/coveragepy/requirements.svg?branch=master
:target: https://requires.io/github/nedbat/coveragepy/requirements/?branch=master
:alt: Requirements status
.. |kit| image:: https://badge.fury.io/py/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: PyPI status
.. |format| image:: https://img.shields.io/pypi/format/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Kit format
.. |downloads| image:: https://img.shields.io/pypi/dw/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Weekly PyPI downloads
.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg?logo=python&logoColor=FBE072
:target: https://pypi.org/project/coverage/
:alt: Python versions supported
.. |status| image:: https://img.shields.io/pypi/status/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Package stability
.. |license| image:: https://img.shields.io/pypi/l/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: License
.. |codecov| image:: https://codecov.io/github/nedbat/coveragepy/coverage.svg?branch=master&precision=2
:target: https://codecov.io/github/nedbat/coveragepy?branch=master
:alt: Coverage!
.. |repos| image:: https://repology.org/badge/tiny-repos/python:coverage.svg
:target: https://repology.org/metapackage/python:coverage/versions
:alt: Packaging status
.. |tidelift| image:: https://tidelift.com/badges/package/pypi/coverage
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
:alt: Tidelift
.. |stars| image:: https://img.shields.io/github/stars/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/stargazers
:alt: Github stars
.. |forks| image:: https://img.shields.io/github/forks/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/network/members
:alt: Github forks
.. |contributors| image:: https://img.shields.io/github/contributors/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/graphs/contributors
:alt: Contributors
.. |twitter-coveragepy| image:: https://img.shields.io/twitter/follow/coveragepy.svg?label=coveragepy&style=flat&logo=twitter&logoColor=4FADFF
:target: https://twitter.com/coveragepy
:alt: coverage.py on Twitter
.. |twitter-nedbat| image:: https://img.shields.io/twitter/follow/nedbat.svg?label=nedbat&style=flat&logo=twitter&logoColor=4FADFF
:target: https://twitter.com/nedbat
:alt: nedbat on Twitter
Keywords: code coverage testing
Platform: UNKNOWN
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4
Description-Content-Type: text/x-rst
Provides-Extra: toml
@ -1,287 +0,0 @@
.editorconfig
.readthedocs.yml
.travis.yml
CHANGES.rst
CONTRIBUTORS.txt
LICENSE.txt
MANIFEST.in
Makefile
NOTICE.txt
README.rst
__main__.py
appveyor.yml
howto.txt
igor.py
metacov.ini
pylintrc
setup.cfg
setup.py
tox.ini
tox_wheels.ini
ci/README.txt
ci/download_appveyor.py
ci/install.ps1
ci/manylinux.sh
ci/run_with_env.cmd
ci/upload_relnotes.py
coverage/__init__.py
coverage/__main__.py
coverage/annotate.py
coverage/backunittest.py
coverage/backward.py
coverage/bytecode.py
coverage/cmdline.py
coverage/collector.py
coverage/config.py
coverage/context.py
coverage/control.py
coverage/data.py
coverage/debug.py
coverage/disposition.py
coverage/env.py
coverage/execfile.py
coverage/files.py
coverage/html.py
coverage/inorout.py
coverage/jsonreport.py
coverage/misc.py
coverage/multiproc.py
coverage/numbits.py
coverage/optional.py
coverage/parser.py
coverage/phystokens.py
coverage/plugin.py
coverage/plugin_support.py
coverage/python.py
coverage/pytracer.py
coverage/report.py
coverage/results.py
coverage/sqldata.py
coverage/summary.py
coverage/templite.py
coverage/tomlconfig.py
coverage/version.py
coverage/xmlreport.py
coverage.egg-info/PKG-INFO
coverage.egg-info/SOURCES.txt
coverage.egg-info/dependency_links.txt
coverage.egg-info/entry_points.txt
coverage.egg-info/not-zip-safe
coverage.egg-info/requires.txt
coverage.egg-info/top_level.txt
coverage/ctracer/datastack.c
coverage/ctracer/datastack.h
coverage/ctracer/filedisp.c
coverage/ctracer/filedisp.h
coverage/ctracer/module.c
coverage/ctracer/stats.h
coverage/ctracer/tracer.c
coverage/ctracer/tracer.h
coverage/ctracer/util.h
coverage/fullcoverage/encodings.py
coverage/htmlfiles/coverage_html.js
coverage/htmlfiles/index.html
coverage/htmlfiles/jquery.ba-throttle-debounce.min.js
coverage/htmlfiles/jquery.hotkeys.js
coverage/htmlfiles/jquery.isonscreen.js
coverage/htmlfiles/jquery.min.js
coverage/htmlfiles/jquery.tablesorter.min.js
coverage/htmlfiles/keybd_closed.png
coverage/htmlfiles/keybd_open.png
coverage/htmlfiles/pyfile.html
coverage/htmlfiles/style.css
coverage/htmlfiles/style.scss
doc/api.rst
doc/api_coverage.rst
doc/api_coveragedata.rst
doc/api_module.rst
doc/api_plugin.rst
doc/branch.rst
doc/changes.rst
doc/check_copied_from.py
doc/cmd.rst
doc/conf.py
doc/config.rst
doc/contexts.rst
doc/contributing.rst
doc/dbschema.rst
doc/dict.txt
doc/excluding.rst
doc/faq.rst
doc/howitworks.rst
doc/index.rst
doc/install.rst
doc/plugins.rst
doc/python-coverage.1.txt
doc/requirements.pip
doc/sleepy.rst
doc/source.rst
doc/subprocess.rst
doc/trouble.rst
doc/whatsnew5x.rst
doc/_static/coverage.css
doc/media/Tidelift_Logos_RGB_Tidelift_Shorthand_On-White.png
doc/media/Tidelift_Logos_RGB_Tidelift_Shorthand_On-White_small.png
doc/media/sleepy-snake-600.png
doc/media/sleepy-snake-circle-150.png
doc/sample_html/keybd_closed.png
doc/sample_html/keybd_open.png
requirements/ci.pip
requirements/dev.pip
requirements/pytest.pip
requirements/tox.pip
requirements/wheel.pip
tests/__init__.py
tests/conftest.py
tests/coveragetest.py
tests/covmodzip1.py
tests/goldtest.py
tests/helpers.py
tests/osinfo.py
tests/plugin1.py
tests/plugin2.py
tests/plugin_config.py
tests/stress_phystoken.tok
tests/stress_phystoken_dos.tok
tests/test_annotate.py
tests/test_api.py
tests/test_arcs.py
tests/test_backward.py
tests/test_cmdline.py
tests/test_collector.py
tests/test_concurrency.py
tests/test_config.py
tests/test_context.py
tests/test_coverage.py
tests/test_data.py
tests/test_debug.py
tests/test_execfile.py
tests/test_filereporter.py
tests/test_files.py
tests/test_html.py
tests/test_json.py
tests/test_misc.py
tests/test_numbits.py
tests/test_oddball.py
tests/test_parser.py
tests/test_phystokens.py
tests/test_plugins.py
tests/test_process.py
tests/test_python.py
tests/test_results.py
tests/test_setup.py
tests/test_summary.py
tests/test_templite.py
tests/test_testing.py
tests/test_version.py
tests/test_xml.py
tests/eggsrc/setup.py
tests/eggsrc/egg1/__init__.py
tests/eggsrc/egg1/egg1.py
tests/gold/README.rst
tests/gold/annotate/anno_dir/a___init__.py,cover
tests/gold/annotate/anno_dir/a_a.py,cover
tests/gold/annotate/anno_dir/b___init__.py,cover
tests/gold/annotate/anno_dir/b_b.py,cover
tests/gold/annotate/anno_dir/multi.py,cover
tests/gold/annotate/annotate/white.py,cover
tests/gold/annotate/encodings/utf8.py,cover
tests/gold/annotate/multi/multi.py,cover
tests/gold/annotate/multi/a/__init__.py,cover
tests/gold/annotate/multi/a/a.py,cover
tests/gold/annotate/multi/b/__init__.py,cover
tests/gold/annotate/multi/b/b.py,cover
tests/gold/html/Makefile
tests/gold/html/a/a_py.html
tests/gold/html/a/index.html
tests/gold/html/b_branch/b_py.html
tests/gold/html/b_branch/index.html
tests/gold/html/bom/bom_py.html
tests/gold/html/bom/index.html
tests/gold/html/bom/2/bom_py.html
tests/gold/html/bom/2/index.html
tests/gold/html/isolatin1/index.html
tests/gold/html/isolatin1/isolatin1_py.html
tests/gold/html/omit_1/index.html
tests/gold/html/omit_1/m1_py.html
tests/gold/html/omit_1/m2_py.html
tests/gold/html/omit_1/m3_py.html
tests/gold/html/omit_1/main_py.html
tests/gold/html/omit_2/index.html
tests/gold/html/omit_2/m2_py.html
tests/gold/html/omit_2/m3_py.html
tests/gold/html/omit_2/main_py.html
tests/gold/html/omit_3/index.html
tests/gold/html/omit_3/m3_py.html
tests/gold/html/omit_3/main_py.html
tests/gold/html/omit_4/index.html
tests/gold/html/omit_4/m1_py.html
tests/gold/html/omit_4/m3_py.html
tests/gold/html/omit_4/main_py.html
tests/gold/html/omit_5/index.html
tests/gold/html/omit_5/m1_py.html
tests/gold/html/omit_5/main_py.html
tests/gold/html/other/blah_blah_other_py.html
tests/gold/html/other/here_py.html
tests/gold/html/other/index.html
tests/gold/html/partial/index.html
tests/gold/html/partial/partial_py.html
tests/gold/html/styled/a_py.html
tests/gold/html/styled/extra.css
tests/gold/html/styled/index.html
tests/gold/html/styled/style.css
tests/gold/html/support/coverage_html.js
tests/gold/html/support/jquery.ba-throttle-debounce.min.js
tests/gold/html/support/jquery.hotkeys.js
tests/gold/html/support/jquery.isonscreen.js
tests/gold/html/support/jquery.min.js
tests/gold/html/support/jquery.tablesorter.min.js
tests/gold/html/support/keybd_closed.png
tests/gold/html/support/keybd_open.png
tests/gold/html/support/style.css
tests/gold/html/unicode/index.html
tests/gold/html/unicode/unicode_py.html
tests/gold/xml/x_xml/coverage.xml
tests/gold/xml/y_xml_branch/coverage.xml
tests/js/index.html
tests/js/tests.js
tests/modules/covmod1.py
tests/modules/runmod1.py
tests/modules/usepkgs.py
tests/modules/aa/__init__.py
tests/modules/aa/afile.odd.py
tests/modules/aa/afile.py
tests/modules/aa/zfile.py
tests/modules/aa/bb/__init__.py
tests/modules/aa/bb/bfile.odd.py
tests/modules/aa/bb/bfile.py
tests/modules/aa/bb.odd/bfile.py
tests/modules/aa/bb/cc/__init__.py
tests/modules/aa/bb/cc/cfile.py
tests/modules/namespace_420/sub1/__init__.py
tests/modules/pkg1/__init__.py
tests/modules/pkg1/__main__.py
tests/modules/pkg1/p1a.py
tests/modules/pkg1/p1b.py
tests/modules/pkg1/p1c.py
tests/modules/pkg1/runmod2.py
tests/modules/pkg1/sub/__init__.py
tests/modules/pkg1/sub/__main__.py
tests/modules/pkg1/sub/ps1a.py
tests/modules/pkg1/sub/runmod3.py
tests/modules/pkg2/__init__.py
tests/modules/pkg2/p2a.py
tests/modules/pkg2/p2b.py
tests/modules/plugins/__init__.py
tests/modules/plugins/a_plugin.py
tests/modules/plugins/another.py
tests/modules/process_test/__init__.py
tests/modules/process_test/try_execfile.py
tests/moremodules/namespace_420/sub2/__init__.py
tests/moremodules/othermods/__init__.py
tests/moremodules/othermods/othera.py
tests/moremodules/othermods/otherb.py
tests/moremodules/othermods/sub/__init__.py
tests/moremodules/othermods/sub/osa.py
tests/moremodules/othermods/sub/osb.py
tests/qunit/jquery.tmpl.min.js
@ -1,5 +0,0 @@
[console_scripts]
coverage = coverage.cmdline:main
coverage-3.8 = coverage.cmdline:main
coverage3 = coverage.cmdline:main
@ -1,3 +0,0 @@
[toml]
toml
@ -1 +0,0 @@
coverage
@ -1,143 +0,0 @@
Metadata-Version: 1.1
Name: esprima
Version: 4.0.1
Summary: ECMAScript parsing infrastructure for multipurpose analysis in Python
Home-page: https://github.com/Kronuz/esprima-python
Author: German M. Bravo (Kronuz)
Author-email: german.mb@gmail.com
License: BSD License
Description: |Donate| |PyPI Version| |PyPI License| |PyPI Format| |PyPI Status|
**Esprima** (`esprima.org <http://esprima.org>`__, BSD license) is a
high performance, standard-compliant
`ECMAScript <http://www.ecma-international.org/publications/standards/Ecma-262.htm>`__
parser officially written in ECMAScript (also popularly known as
`JavaScript <https://en.wikipedia.org/wiki/JavaScript>`__) and ported to
Python. Esprima is created and maintained by `Ariya
Hidayat <https://twitter.com/ariyahidayat>`__, with the help of `many
contributors <https://github.com/jquery/esprima/contributors>`__.
The Python port is a line-by-line manual translation; it was created and is
maintained by `German Mendez Bravo
(Kronuz) <https://twitter.com/germbravo>`__.
Features
~~~~~~~~
- Full support for ECMAScript 2017 (`ECMA-262 8th
Edition <http://www.ecma-international.org/publications/standards/Ecma-262.htm>`__)
- Sensible `syntax tree
format <https://github.com/estree/estree/blob/master/es5.md>`__ as
standardized by `ESTree project <https://github.com/estree/estree>`__
- Experimental support for `JSX <https://facebook.github.io/jsx/>`__, a
syntax extension for `React <https://facebook.github.io/react/>`__
- Optional tracking of syntax node location (index-based and
line-column)
- `Heavily tested <http://esprima.org/test/ci.html>`__ (~1500 `unit
tests <https://github.com/jquery/esprima/tree/master/test/fixtures>`__
with `full code
coverage <https://codecov.io/github/jquery/esprima>`__)
Installation
~~~~~~~~~~~~
.. code:: shell
pip install esprima
API
~~~
Esprima can be used to perform `lexical
analysis <https://en.wikipedia.org/wiki/Lexical_analysis>`__
(tokenization) or `syntactic
analysis <https://en.wikipedia.org/wiki/Parsing>`__ (parsing) of a
JavaScript program.
A simple example:
.. code:: python
>>> import esprima
>>> program = 'const answer = 42'
>>> esprima.tokenize(program)
[{
type: "Keyword",
value: "const"
}, {
type: "Identifier",
value: "answer"
}, {
type: "Punctuator",
value: "="
}, {
type: "Numeric",
value: "42"
}]
>>> esprima.parseScript(program)
{
body: [
{
kind: "const",
declarations: [
{
init: {
raw: "42",
type: "Literal",
value: 42
},
type: "VariableDeclarator",
id: {
type: "Identifier",
name: "answer"
}
}
],
type: "VariableDeclaration"
}
],
type: "Program",
sourceType: "script"
}
For more information, please read the `complete
documentation <http://esprima.org/doc>`__.
.. |Donate| image:: https://img.shields.io/badge/Donate-PayPal-green.svg
:target: https://www.paypal.me/Kronuz/25
.. |PyPI Version| image:: https://img.shields.io/pypi/v/esprima.svg
:target: https://pypi.python.org/pypi/esprima
.. |PyPI License| image:: https://img.shields.io/pypi/l/esprima.svg
:target: https://pypi.python.org/pypi/esprima
.. |PyPI Wheel| image:: https://img.shields.io/pypi/wheel/esprima.svg
:target: https://pypi.python.org/pypi/esprima
.. |PyPI Format| image:: https://img.shields.io/pypi/format/esprima.svg
:target: https://pypi.python.org/pypi/esprima
.. |PyPI Python Version| image:: https://img.shields.io/pypi/pyversions/esprima.svg
:target: https://pypi.python.org/pypi/esprima
.. |PyPI Implementation| image:: https://img.shields.io/pypi/implementation/esprima.svg
:target: https://pypi.python.org/pypi/esprima
.. |PyPI Status| image:: https://img.shields.io/pypi/status/esprima.svg
:target: https://pypi.python.org/pypi/esprima
.. |PyPI Downloads| image:: https://img.shields.io/pypi/dm/esprima.svg
:target: https://pypi.python.org/pypi/esprima
Keywords: esprima ecmascript javascript parser ast
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Software Development :: Code Generators
Classifier: Topic :: Software Development :: Compilers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing :: General
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
@ -1,29 +0,0 @@
README
setup.py
esprima/__init__.py
esprima/__main__.py
esprima/character.py
esprima/comment_handler.py
esprima/compat.py
esprima/error_handler.py
esprima/esprima.py
esprima/jsx_nodes.py
esprima/jsx_parser.py
esprima/jsx_syntax.py
esprima/messages.py
esprima/nodes.py
esprima/objects.py
esprima/parser.py
esprima/scanner.py
esprima/syntax.py
esprima/token.py
esprima/tokenizer.py
esprima/utils.py
esprima/visitor.py
esprima/xhtml_entities.py
esprima.egg-info/PKG-INFO
esprima.egg-info/SOURCES.txt
esprima.egg-info/dependency_links.txt
esprima.egg-info/entry_points.txt
esprima.egg-info/pbr.json
esprima.egg-info/top_level.txt
@ -1,3 +0,0 @@
[console_scripts]
esprima = esprima.__main__:main
@ -1 +0,0 @@
{"is_release": false, "git_version": "ac65290"}
@ -1 +0,0 @@
esprima
@ -1,18 +0,0 @@
Metadata-Version: 2.1
Name: fluent.migrate
Version: 0.11
Summary: Toolchain to migrate legacy translation to Fluent.
Home-page: https://hg.mozilla.org/l10n/fluent-migration/
Author: Mozilla
Author-email: l10n-drivers@mozilla.org
License: APL 2
Description: UNKNOWN
Keywords: fluent,localization,l10n
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.7
Description-Content-Type: text/markdown
Provides-Extra: hg
@ -1,23 +0,0 @@
README.md
setup.cfg
setup.py
fluent/__init__.py
fluent.migrate.egg-info/PKG-INFO
fluent.migrate.egg-info/SOURCES.txt
fluent.migrate.egg-info/dependency_links.txt
fluent.migrate.egg-info/entry_points.txt
fluent.migrate.egg-info/requires.txt
fluent.migrate.egg-info/top_level.txt
fluent/migrate/__init__.py
fluent/migrate/_context.py
fluent/migrate/blame.py
fluent/migrate/changesets.py
fluent/migrate/context.py
fluent/migrate/errors.py
fluent/migrate/evaluator.py
fluent/migrate/helpers.py
fluent/migrate/merge.py
fluent/migrate/tool.py
fluent/migrate/transforms.py
fluent/migrate/util.py
fluent/migrate/validator.py
@ -1,4 +0,0 @@
[console_scripts]
migrate-l10n = fluent.migrate.tool:cli
validate-l10n-recipe = fluent.migrate.validator:cli
@ -1,6 +0,0 @@
compare-locales<9.0,>=8.1
fluent.syntax<0.19,>=0.18.0
six
[hg]
python-hglib
@ -1,81 +0,0 @@
Metadata-Version: 1.1
Name: idna-ssl
Version: 1.1.0
Summary: Patch ssl.match_hostname for Unicode(idna) domains support
Home-page: https://github.com/aio-libs/idna-ssl
Author: Victor Kovtun
Author-email: hellysmile@gmail.com
License: UNKNOWN
Description: idna-ssl
========
:info: Patch ssl.match_hostname for Unicode(idna) domains support
.. image:: https://travis-ci.com/aio-libs/idna-ssl.svg?branch=master
:target: https://travis-ci.com/aio-libs/idna-ssl
.. image:: https://img.shields.io/pypi/v/idna_ssl.svg
:target: https://pypi.python.org/pypi/idna_ssl
.. image:: https://codecov.io/gh/aio-libs/idna-ssl/branch/master/graph/badge.svg
:target: https://codecov.io/gh/aio-libs/idna-ssl
Installation
------------
.. code-block:: shell
pip install idna-ssl
Usage
-----
.. code-block:: python
from idna_ssl import patch_match_hostname # noqa isort:skip
patch_match_hostname() # noqa isort:skip
import asyncio
import aiohttp
URL = 'https://цфоут.мвд.рф/news/item/8065038/'
async def main():
async with aiohttp.ClientSession() as session:
async with session.get(URL) as response:
print(response)
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Motivation
----------
* It is 100% backward compatible
* Related aiohttp `issue <https://github.com/aio-libs/aiohttp/issues/949>`_
* Related Python `bug <https://bugs.python.org/issue31872>`_
* Related Python `pull request <https://github.com/python/cpython/pull/3462>`_
* It is fixed (as of January 27, 2018) in the upcoming Python 3.7, but `IDNA2008 <https://tools.ietf.org/html/rfc5895>`_ support is still broken
Thanks
------
The library was donated by `Ocean S.A. <https://ocean.io/>`_.
Thanks to the company for its contribution.
Keywords: ssl,Unicode,idna,match_hostname
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
@ -1,12 +0,0 @@
LICENSE
MANIFEST.in
README.rst
idna_ssl.py
setup.cfg
setup.py
idna_ssl.egg-info/PKG-INFO
idna_ssl.egg-info/SOURCES.txt
idna_ssl.egg-info/dependency_links.txt
idna_ssl.egg-info/not-zip-safe
idna_ssl.egg-info/requires.txt
idna_ssl.egg-info/top_level.txt
@ -1 +0,0 @@
idna>=2.0
@ -1 +0,0 @@
idna_ssl
@ -1,117 +0,0 @@
Metadata-Version: 1.1
Name: jsmin
Version: 2.1.0
Summary: JavaScript minifier.
PLEASE UPDATE TO VERSION >= 2.0.6. Older versions have a serious bug related to comments.
Home-page: https://bitbucket.org/dcs/jsmin/
Author: Tikitu de Jager
Author-email: tikitu+jsmin@logophile.org
License: MIT License
Description: =====
jsmin
=====
JavaScript minifier.
Usage
=====
.. code:: python
from jsmin import jsmin
with open('myfile.js') as js_file:
minified = jsmin(js_file.read())
You can run it as a commandline tool also::
python -m jsmin myfile.js
As yet, ``jsmin`` makes no attempt to be compatible with
`ECMAScript 6 / ES.next / Harmony <http://wiki.ecmascript.org/doku.php?id=harmony:specification_drafts>`_.
If you're using it on Harmony code, though, you might find the ``quote_chars``
parameter useful:
.. code:: python
from jsmin import jsmin
with open('myfile.js') as js_file:
minified = jsmin(js_file.read(), quote_chars="'\"`")
Where to get it
===============
* install the package `from pypi <https://pypi.python.org/pypi/jsmin/>`_
* get the latest release `from the stable branch on bitbucket <https://bitbucket.org/dcs/jsmin/branch/stable>`_
* get the development version `from the default branch on bitbucket <https://bitbucket.org/dcs/jsmin/branch/default>`_
Contributing
============
`Issues <https://bitbucket.org/dcs/jsmin/issues>`_ and `Pull requests <https://bitbucket.org/dcs/jsmin/pull-requests>`_
will be gratefully received on Bitbucket. Pull requests on github are great too, but the issue tracker lives on
bitbucket.
If possible, please make separate pull requests for tests and for code: tests will be committed on the stable branch
(which tracks the latest released version) while code will go to default by, erm, default.
Unless you request otherwise, your Bitbucket identity will be added to the contributor's list below; if you prefer a
different name feel free to add it in your pull request instead. (If you prefer not to be mentioned you'll have to let
the maintainer know somehow.)
Build/test status
=================
Both default and stable branches are tested with Travis: https://travis-ci.org/tikitu/jsmin
Stable (latest released version plus any new tests) is tested against CPython 2.6, 2.7, 3.2, and 3.3.
Currently:
.. image:: https://travis-ci.org/tikitu/jsmin.png?branch=ghstable
If stable is failing that means there's a new test that fails on *the latest released version on pypi*, with no fix yet
released.
Default (development version, might be ahead of latest released version) is tested against CPython 2.6, 2.7, 3.2, and
3.3. Currently:
.. image:: https://travis-ci.org/tikitu/jsmin.png?branch=master
If default is failing don't use it, but as long as stable is passing the pypi release should be ok.
Contributors (chronological commit order)
=========================================
* `Dave St.Germain <https://bitbucket.org/dcs>`_ (original author)
* `Hans weltar <https://bitbucket.org/hansweltar>`_
* `Tikitu de Jager <mailto:tikitu+jsmin@logophile.org>`_ (current maintainer)
* https://bitbucket.org/rennat
* `Nick Alexander <https://bitbucket.org/ncalexan>`_
Changelog
=========
v2.1.0 (2014-12-24) Tikitu de Jager
-----------------------------------
* First changelog entries; see README.rst for prior contributors.
* Expose quote_chars parameter to provide just enough unofficial Harmony
support to be useful.
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
Classifier: Topic :: Software Development :: Pre-processors
Classifier: Topic :: Text Processing :: Filters
@ -1,13 +0,0 @@
CHANGELOG.txt
LICENSE.txt
MANIFEST.in
README.rst
setup.cfg
setup.py
jsmin/__init__.py
jsmin/__main__.py
jsmin/test.py
jsmin.egg-info/PKG-INFO
jsmin.egg-info/SOURCES.txt
jsmin.egg-info/dependency_links.txt
jsmin.egg-info/top_level.txt
@ -1 +0,0 @@
jsmin
@ -1,11 +0,0 @@
Metadata-Version: 2.1
Name: json-e
Version: 2.7.0
Summary: A data-structure parameterization system written for embedding context in JSON objects
Home-page: https://taskcluster.github.io/json-e/
Author: Dustin J. Mitchell
Author-email: dustin@mozilla.com
License: MPL2
Description: UNKNOWN
Platform: UNKNOWN
Provides-Extra: release
@ -1,17 +0,0 @@
MANIFEST.in
README.md
package.json
setup.cfg
setup.py
json_e.egg-info/PKG-INFO
json_e.egg-info/SOURCES.txt
json_e.egg-info/dependency_links.txt
json_e.egg-info/requires.txt
json_e.egg-info/top_level.txt
jsone/__init__.py
jsone/builtins.py
jsone/interpreter.py
jsone/prattparser.py
jsone/render.py
jsone/shared.py
jsone/six.py
@ -1,3 +0,0 @@
[release]
towncrier
@ -1 +0,0 @@
jsone
@ -1,19 +0,0 @@
Metadata-Version: 1.1
Name: mohawk
Version: 0.3.4
Summary: Library for Hawk HTTP authorization
Home-page: https://github.com/kumar303/mohawk
Author: Kumar McMillan, Austin King
Author-email: kumar.mcmillan@gmail.com
License: MPL 2.0 (Mozilla Public License)
Description: UNKNOWN
Platform: UNKNOWN
Classifier: Intended Audience :: Developers
Classifier: Natural Language :: English
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.3
Classifier: Topic :: Internet :: WWW/HTTP
@ -1,15 +0,0 @@
README.rst
setup.py
mohawk/__init__.py
mohawk/base.py
mohawk/bewit.py
mohawk/exc.py
mohawk/receiver.py
mohawk/sender.py
mohawk/tests.py
mohawk/util.py
mohawk.egg-info/PKG-INFO
mohawk.egg-info/SOURCES.txt
mohawk.egg-info/dependency_links.txt
mohawk.egg-info/requires.txt
mohawk.egg-info/top_level.txt
@ -1 +0,0 @@
six
@ -1 +0,0 @@
mohawk
@ -1,128 +0,0 @@
Metadata-Version: 1.2
Name: multidict
Version: 5.1.0
Summary: multidict implementation
Home-page: https://github.com/aio-libs/multidict
Author: Andrew Svetlov
Author-email: andrew.svetlov@gmail.com
License: Apache 2
Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby
Project-URL: CI: Azure Pipelines, https://dev.azure.com/aio-libs/multidict/_build
Project-URL: Coverage: codecov, https://codecov.io/github/aio-libs/multidict
Project-URL: Docs: RTD, https://multidict.readthedocs.io
Project-URL: GitHub: issues, https://github.com/aio-libs/multidict/issues
Project-URL: GitHub: repo, https://github.com/aio-libs/multidict
Description: =========
multidict
=========
.. image:: https://github.com/aio-libs/multidict/workflows/CI/badge.svg
:target: https://github.com/aio-libs/multidict/actions?query=workflow%3ACI
:alt: GitHub status for master branch
.. image:: https://codecov.io/gh/aio-libs/multidict/branch/master/graph/badge.svg
:target: https://codecov.io/gh/aio-libs/multidict
:alt: Coverage metrics
.. image:: https://img.shields.io/pypi/v/multidict.svg
:target: https://pypi.org/project/multidict
:alt: PyPI
.. image:: https://readthedocs.org/projects/multidict/badge/?version=latest
:target: http://multidict.readthedocs.org/en/latest/?badge=latest
:alt: Documentation
.. image:: https://img.shields.io/pypi/pyversions/multidict.svg
:target: https://pypi.org/project/multidict
:alt: Python versions
.. image:: https://badges.gitter.im/Join%20Chat.svg
:target: https://gitter.im/aio-libs/Lobby
:alt: Chat on Gitter
Multidict is a dict-like collection of *key-value pairs* where the same key
may occur more than once in the container.
Introduction
------------
*HTTP Headers* and *URL query string* require specific data structure:
*multidict*. It behaves mostly like a regular ``dict`` but it may have
several *values* for the same *key* and *preserves insertion ordering*.
The *key* is ``str`` (or ``istr`` for case-insensitive dictionaries).
``multidict`` has four multidict classes:
``MultiDict``, ``MultiDictProxy``, ``CIMultiDict``
and ``CIMultiDictProxy``.
Immutable proxies (``MultiDictProxy`` and
``CIMultiDictProxy``) provide a dynamic view for the
proxied multidict, the view reflects underlying collection changes. They
implement the ``collections.abc.Mapping`` interface.
Regular mutable (``MultiDict`` and ``CIMultiDict``) classes
implement ``collections.abc.MutableMapping`` and allow changing
their own content.
*Case insensitive* (``CIMultiDict`` and
``CIMultiDictProxy``) ones assume the *keys* are case
insensitive, e.g.::
>>> dct = CIMultiDict(key='val')
>>> 'Key' in dct
True
>>> dct['Key']
'val'
*Keys* should be ``str`` or ``istr`` instances.
The library has optional C extensions for the sake of speed.
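As an illustrative sketch added by the editor (not part of the upstream README), the multiple-values behaviour looks roughly like this, using the documented ``add`` and ``getall`` methods:
.. code:: python

    from multidict import MultiDict

    md = MultiDict()
    md.add('key', 'one')
    md.add('key', 'two')          # same key stored twice; insertion order kept

    print(md['key'])              # 'one'  -- __getitem__ returns the first value
    print(md.getall('key'))       # ['one', 'two']
    print(list(md.items()))       # [('key', 'one'), ('key', 'two')]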
License
-------
Apache 2
Library Installation
--------------------
.. code-block:: bash
$ pip install multidict
The library is Python 3 only!
PyPI contains binary wheels for Linux, Windows and macOS. If you want to install
``multidict`` on another operating system (or on *Alpine Linux* inside Docker), the
tarball will be used to compile the library from source. That requires a C compiler
and Python headers to be installed.
To skip the compilation, set the `MULTIDICT_NO_EXTENSIONS` environment variable,
e.g.:
.. code-block:: bash
$ MULTIDICT_NO_EXTENSIONS=1 pip install multidict
Please note that the pure-Python (uncompiled) version is about 20-50 times slower,
depending on the usage scenario!
Changelog
---------
See `RTD page <http://multidict.readthedocs.org/en/latest/changes.html>`_.
Platform: UNKNOWN
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=3.6
@ -1,71 +0,0 @@
CHANGES.rst
LICENSE
MANIFEST.in
Makefile
README.rst
pyproject.toml
setup.cfg
setup.py
docs/Makefile
docs/benchmark.rst
docs/changes.rst
docs/conf.py
docs/index.rst
docs/make.bat
docs/multidict.rst
docs/spelling_wordlist.txt
multidict/__init__.py
multidict/__init__.pyi
multidict/_abc.py
multidict/_compat.py
multidict/_multidict.c
multidict/_multidict_base.py
multidict/_multidict_py.py
multidict/py.typed
multidict.egg-info/PKG-INFO
multidict.egg-info/SOURCES.txt
multidict.egg-info/dependency_links.txt
multidict.egg-info/top_level.txt
multidict/_multilib/defs.h
multidict/_multilib/dict.h
multidict/_multilib/istr.h
multidict/_multilib/iter.h
multidict/_multilib/pair_list.h
multidict/_multilib/views.h
tests/cimultidict.pickle.0
tests/cimultidict.pickle.1
tests/cimultidict.pickle.2
tests/cimultidict.pickle.3
tests/cimultidict.pickle.4
tests/cimultidict.pickle.5
tests/conftest.py
tests/gen_pickles.py
tests/multidict.pickle.0
tests/multidict.pickle.1
tests/multidict.pickle.2
tests/multidict.pickle.3
tests/multidict.pickle.4
tests/multidict.pickle.5
tests/pycimultidict.pickle.0
tests/pycimultidict.pickle.1
tests/pycimultidict.pickle.2
tests/pycimultidict.pickle.3
tests/pycimultidict.pickle.4
tests/pycimultidict.pickle.5
tests/pymultidict.pickle.0
tests/pymultidict.pickle.1
tests/pymultidict.pickle.2
tests/pymultidict.pickle.3
tests/pymultidict.pickle.4
tests/pymultidict.pickle.5
tests/test_abc.py
tests/test_copy.py
tests/test_guard.py
tests/test_istr.py
tests/test_multidict.py
tests/test_mutable_multidict.py
tests/test_mypy.py
tests/test_pickle.py
tests/test_types.py
tests/test_update.py
tests/test_version.py
@ -1 +0,0 @@
multidict
@ -1,3 +0,0 @@
This software is made available under the terms of *either* of the licenses
found in LICENSE.APACHE or LICENSE.BSD. Contributions to this software is made
under the terms of *both* these licenses.
@ -1,177 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
@ -1,23 +0,0 @@
Copyright (c) Donald Stufft and individual contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@ -1,425 +0,0 @@
Metadata-Version: 2.1
Name: packaging
Version: 21.0
Summary: Core utilities for Python packages
Home-page: https://github.com/pypa/packaging
Author: Donald Stufft and individual contributors
Author-email: donald@stufft.io
License: BSD-2-Clause or Apache-2.0
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=3.6
Description-Content-Type: text/x-rst
License-File: LICENSE
License-File: LICENSE.APACHE
License-File: LICENSE.BSD
Requires-Dist: pyparsing (>=2.0.2)
packaging
=========
.. start-intro
Reusable core utilities for various Python Packaging
`interoperability specifications <https://packaging.python.org/specifications/>`_.
This library provides utilities that implement the interoperability
specifications which clearly have one correct behaviour (e.g. :pep:`440`)
or benefit greatly from having a single shared implementation (e.g. :pep:`425`).
.. end-intro
The ``packaging`` project includes the following: version handling, specifiers,
markers, requirements, tags, utilities.
Documentation
-------------
The `documentation`_ provides information and the API for the following:
- Version Handling
- Specifiers
- Markers
- Requirements
- Tags
- Utilities
Installation
------------
Use ``pip`` to install these utilities::
pip install packaging
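As a quick illustration (an editor's addition, not from the upstream README), the version-handling and specifier utilities mentioned above fit together roughly like this:
.. code:: python

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    spec = SpecifierSet(">=20.0,<22")            # PEP 440 version specifiers
    print(Version("21.0") in spec)               # True
    print(Version("19.2") in spec)               # False
    print(Version("21.0") > Version("20.9"))     # PEP 440-aware ordering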
Discussion
----------
If you run into bugs, you can file them in our `issue tracker`_.
You can also join ``#pypa`` on Freenode to ask questions or get involved.
.. _`documentation`: https://packaging.pypa.io/
.. _`issue tracker`: https://github.com/pypa/packaging/issues
Code of Conduct
---------------
Everyone interacting in the packaging project's codebases, issue trackers, chat
rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_.
.. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md
Contributing
------------
The ``CONTRIBUTING.rst`` file outlines how to contribute to this project as
well as how to report a potential security issue. The documentation for this
project also covers information about `project development`_ and `security`_.
.. _`project development`: https://packaging.pypa.io/en/latest/development/
.. _`security`: https://packaging.pypa.io/en/latest/security/
Project History
---------------
Please review the ``CHANGELOG.rst`` file or the `Changelog documentation`_ for
recent changes and project history.
.. _`Changelog documentation`: https://packaging.pypa.io/en/latest/changelog/
Changelog
---------
21.0 - 2021-07-03
~~~~~~~~~~~~~~~~~
* `packaging` is now only compatible with Python 3.6 and above.
* Add support for zip files in ``parse_sdist_filename`` (`#429 <https://github.com/pypa/packaging/issues/429>`__)
20.9 - 2021-01-29
~~~~~~~~~~~~~~~~~
* Run `isort <https://pypi.org/project/isort/>`_ over the code base (`#377 <https://github.com/pypa/packaging/issues/377>`__)
* Add support for the ``macosx_10_*_universal2`` platform tags (`#379 <https://github.com/pypa/packaging/issues/379>`__)
* Introduce ``packaging.utils.parse_wheel_filename()`` and ``parse_sdist_filename()``
(`#387 <https://github.com/pypa/packaging/issues/387>`__ and `#389 <https://github.com/pypa/packaging/issues/389>`__)
20.8 - 2020-12-11
~~~~~~~~~~~~~~~~~
* Revert back to setuptools for compatibility purposes for some Linux distros (`#363 <https://github.com/pypa/packaging/issues/363>`__)
* Do not insert an underscore in wheel tags when the interpreter version number
is more than 2 digits (`#372 <https://github.com/pypa/packaging/issues/372>`__)
20.7 - 2020-11-28
~~~~~~~~~~~~~~~~~
No unreleased changes.
20.6 - 2020-11-28
~~~~~~~~~~~~~~~~~
.. note:: This release was subsequently yanked, and these changes were included in 20.7.
* Fix flit configuration, to include LICENSE files (`#357 <https://github.com/pypa/packaging/issues/357>`__)
* Make `intel` a recognized CPU architecture for the `universal` macOS platform tag (`#361 <https://github.com/pypa/packaging/issues/361>`__)
* Add some missing type hints to `packaging.requirements` (issue:`350`)
20.5 - 2020-11-27
~~~~~~~~~~~~~~~~~
* Officially support Python 3.9 (`#343 <https://github.com/pypa/packaging/issues/343>`__)
* Deprecate the ``LegacyVersion`` and ``LegacySpecifier`` classes (`#321 <https://github.com/pypa/packaging/issues/321>`__)
* Handle ``OSError`` on non-dynamic executables when attempting to resolve
the glibc version string.
20.4 - 2020-05-19
~~~~~~~~~~~~~~~~~
* Canonicalize version before comparing specifiers. (`#282 <https://github.com/pypa/packaging/issues/282>`__)
* Change type hint for ``canonicalize_name`` to return
``packaging.utils.NormalizedName``.
This enables the use of static typing tools (like mypy) to detect mixing of
normalized and un-normalized names.
20.3 - 2020-03-05
~~~~~~~~~~~~~~~~~
* Fix changelog for 20.2.
20.2 - 2020-03-05
~~~~~~~~~~~~~~~~~
* Fix a bug that caused a 32-bit OS that runs on a 64-bit ARM CPU (e.g. ARM-v8,
aarch64), to report the wrong bitness.
20.1 - 2020-01-24
~~~~~~~~~~~~~~~~~~~
* Fix a bug caused by reuse of an exhausted iterator. (`#257 <https://github.com/pypa/packaging/issues/257>`__)
20.0 - 2020-01-06
~~~~~~~~~~~~~~~~~
* Add type hints (`#191 <https://github.com/pypa/packaging/issues/191>`__)
* Add proper trove classifiers for PyPy support (`#198 <https://github.com/pypa/packaging/issues/198>`__)
* Scale back depending on ``ctypes`` for manylinux support detection (`#171 <https://github.com/pypa/packaging/issues/171>`__)
* Use ``sys.implementation.name`` where appropriate for ``packaging.tags`` (`#193 <https://github.com/pypa/packaging/issues/193>`__)
* Expand upon the API provided by ``packaging.tags``: ``interpreter_name()``, ``mac_platforms()``, ``compatible_tags()``, ``cpython_tags()``, ``generic_tags()`` (`#187 <https://github.com/pypa/packaging/issues/187>`__)
* Officially support Python 3.8 (`#232 <https://github.com/pypa/packaging/issues/232>`__)
* Add ``major``, ``minor``, and ``micro`` aliases to ``packaging.version.Version`` (`#226 <https://github.com/pypa/packaging/issues/226>`__)
* Properly mark ``packaging`` has being fully typed by adding a `py.typed` file (`#226 <https://github.com/pypa/packaging/issues/226>`__)
19.2 - 2019-09-18
~~~~~~~~~~~~~~~~~
* Remove dependency on ``attrs`` (`#178 <https://github.com/pypa/packaging/issues/178>`__, `#179 <https://github.com/pypa/packaging/issues/179>`__)
* Use appropriate fallbacks for CPython ABI tag (`#181 <https://github.com/pypa/packaging/issues/181>`__, `#185 <https://github.com/pypa/packaging/issues/185>`__)
* Add manylinux2014 support (`#186 <https://github.com/pypa/packaging/issues/186>`__)
* Improve ABI detection (`#181 <https://github.com/pypa/packaging/issues/181>`__)
* Properly handle debug wheels for Python 3.8 (`#172 <https://github.com/pypa/packaging/issues/172>`__)
* Improve detection of debug builds on Windows (`#194 <https://github.com/pypa/packaging/issues/194>`__)
19.1 - 2019-07-30
~~~~~~~~~~~~~~~~~
* Add the ``packaging.tags`` module. (`#156 <https://github.com/pypa/packaging/issues/156>`__)
* Correctly handle two-digit versions in ``python_version`` (`#119 <https://github.com/pypa/packaging/issues/119>`__)
19.0 - 2019-01-20
~~~~~~~~~~~~~~~~~
* Fix string representation of PEP 508 direct URL requirements with markers.
* Better handling of file URLs
This allows for using ``file:///absolute/path``, which was previously
prevented due to the missing ``netloc``.
This allows for all file URLs that ``urlunparse`` turns back into the
original URL to be valid.
18.0 - 2018-09-26
~~~~~~~~~~~~~~~~~
* Improve error messages when invalid requirements are given. (`#129 <https://github.com/pypa/packaging/issues/129>`__)
17.1 - 2017-02-28
~~~~~~~~~~~~~~~~~
* Fix ``utils.canonicalize_version`` when supplying non PEP 440 versions.
17.0 - 2017-02-28
~~~~~~~~~~~~~~~~~
* Drop support for python 2.6, 3.2, and 3.3.
* Define minimal pyparsing version to 2.0.2 (`#91 <https://github.com/pypa/packaging/issues/91>`__).
* Add ``epoch``, ``release``, ``pre``, ``dev``, and ``post`` attributes to
``Version`` and ``LegacyVersion`` (`#34 <https://github.com/pypa/packaging/issues/34>`__).
* Add ``Version().is_devrelease`` and ``LegacyVersion().is_devrelease`` to
make it easy to determine if a release is a development release.
* Add ``utils.canonicalize_version`` to canonicalize version strings or
``Version`` instances (`#121 <https://github.com/pypa/packaging/issues/121>`__).
16.8 - 2016-10-29
~~~~~~~~~~~~~~~~~
* Fix markers that utilize ``in`` so that they render correctly.
* Fix an erroneous test on Python RC releases.
16.7 - 2016-04-23
~~~~~~~~~~~~~~~~~
* Add support for the deprecated ``python_implementation`` marker which was
an undocumented setuptools marker in addition to the newer markers.
16.6 - 2016-03-29
~~~~~~~~~~~~~~~~~
* Add support for the deprecated, PEP 345 environment markers in addition to
the newer markers.
16.5 - 2016-02-26
~~~~~~~~~~~~~~~~~
* Fix a regression in parsing requirements with whitespaces between the comma
separators.
16.4 - 2016-02-22
~~~~~~~~~~~~~~~~~
* Fix a regression in parsing requirements like ``foo (==4)``.
16.3 - 2016-02-21
~~~~~~~~~~~~~~~~~
* Fix a bug where ``packaging.requirements:Requirement`` was overly strict when
matching legacy requirements.
16.2 - 2016-02-09
~~~~~~~~~~~~~~~~~
* Add a function that implements the name canonicalization from PEP 503.
16.1 - 2016-02-07
~~~~~~~~~~~~~~~~~
* Implement requirement specifiers from PEP 508.
16.0 - 2016-01-19
~~~~~~~~~~~~~~~~~
* Relicense so that packaging is available under *either* the Apache License,
Version 2.0 or a 2 Clause BSD license.
* Support installation of packaging when only distutils is available.
* Fix ``==`` comparison when there is a prefix and a local version in play.
(`#41 <https://github.com/pypa/packaging/issues/41>`__).
* Implement environment markers from PEP 508.
15.3 - 2015-08-01
~~~~~~~~~~~~~~~~~
* Normalize post-release spellings for rev/r prefixes. `#35 <https://github.com/pypa/packaging/issues/35>`__
15.2 - 2015-05-13
~~~~~~~~~~~~~~~~~
* Fix an error where the arbitrary specifier (``===``) was not correctly
allowing pre-releases when it was being used.
* Expose the specifier and version parts through properties on the
``Specifier`` classes.
* Allow iterating over the ``SpecifierSet`` to get access to all of the
``Specifier`` instances.
* Allow testing if a version is contained within a specifier via the ``in``
operator.
15.1 - 2015-04-13
~~~~~~~~~~~~~~~~~
* Fix a logic error that was causing inconsistent answers about whether or not
a pre-release was contained within a ``SpecifierSet``.
15.0 - 2015-01-02
~~~~~~~~~~~~~~~~~
* Add ``Version().is_postrelease`` and ``LegacyVersion().is_postrelease`` to
make it easy to determine if a release is a post release.
* Add ``Version().base_version`` and ``LegacyVersion().base_version`` to make
it easy to get the public version without any pre or post release markers.
* Support the update to PEP 440 which removed the implied ``!=V.*`` when using
either ``>V`` or ``<V`` and which instead special cased the handling of
pre-releases, post-releases, and local versions when using ``>V`` or ``<V``.
14.5 - 2014-12-17
~~~~~~~~~~~~~~~~~
* Normalize release candidates as ``rc`` instead of ``c``.
* Expose the ``VERSION_PATTERN`` constant, a regular expression matching
a valid version.
14.4 - 2014-12-15
~~~~~~~~~~~~~~~~~
* Ensure that versions are normalized before comparison when used in a
specifier with a less than (``<``) or greater than (``>``) operator.
14.3 - 2014-11-19
~~~~~~~~~~~~~~~~~
* **BACKWARDS INCOMPATIBLE** Refactor specifier support so that it can sanely
handle legacy specifiers as well as PEP 440 specifiers.
* **BACKWARDS INCOMPATIBLE** Move the specifier support out of
``packaging.version`` into ``packaging.specifiers``.
14.2 - 2014-09-10
~~~~~~~~~~~~~~~~~
* Add prerelease support to ``Specifier``.
* Remove the ability to do ``item in Specifier()`` and replace it with
``Specifier().contains(item)`` in order to allow flags that signal if a
prerelease should be accepted or not.
* Add a method ``Specifier().filter()`` which will take an iterable and returns
an iterable with items that do not match the specifier filtered out.
14.1 - 2014-09-08
~~~~~~~~~~~~~~~~~
* Allow ``LegacyVersion`` and ``Version`` to be sorted together.
* Add ``packaging.version.parse()`` to make it easy to parse a version string
as either a ``Version`` or a ``LegacyVersion``, depending on its PEP 440
validity.
14.0 - 2014-09-05
~~~~~~~~~~~~~~~~~
* Initial release.
.. _`master`: https://github.com/pypa/packaging/
@ -1,19 +0,0 @@
packaging/__about__.py,sha256=p_OQloqH2saadcbUQmWEsWK857dI6_ff5E3aSiCqGFA,661
packaging/__init__.py,sha256=b9Kk5MF7KxhhLgcDmiUWukN-LatWFxPdNug0joPhHSk,497
packaging/_manylinux.py,sha256=XcbiXB-qcjv3bcohp6N98TMpOP4_j3m-iOA8ptK2GWY,11488
packaging/_musllinux.py,sha256=z5yeG1ygOPx4uUyLdqj-p8Dk5UBb5H_b0NIjW9yo8oA,4378
packaging/_structures.py,sha256=TMiAgFbdUOPmIfDIfiHc3KFhSJ8kMjof2QS5I-2NyQ8,1629
packaging/markers.py,sha256=Fygi3_eZnjQ-3VJizW5AhI5wvo0Hb6RMk4DidsKpOC0,8475
packaging/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
packaging/requirements.py,sha256=rjaGRCMepZS1mlYMjJ5Qh6rfq3gtsCRQUQmftGZ_bu8,4664
packaging/specifiers.py,sha256=MZ-fYcNL3u7pNrt-6g2EQO7AbRXkjc-SPEYwXMQbLmc,30964
packaging/tags.py,sha256=akIerYw8W0sz4OW9HHozgawWnbt2GGOPm3sviW0jowY,15714
packaging/utils.py,sha256=dJjeat3BS-TYn1RrUFVwufUMasbtzLfYRoy_HXENeFQ,4200
packaging/version.py,sha256=_fLRNrFrxYcHVfyo8vk9j8s6JM8N_xsSxVFr6RJyco8,14665
packaging-21.0.dist-info/LICENSE,sha256=ytHvW9NA1z4HS6YU0m996spceUDD2MNIUuZcSQlobEg,197
packaging-21.0.dist-info/LICENSE.APACHE,sha256=DVQuDIgE45qn836wDaWnYhSdxoLXgpRRKH4RuTjpRZQ,10174
packaging-21.0.dist-info/LICENSE.BSD,sha256=tw5-m3QvHMb5SLNMFqo5_-zpQZY2S8iP8NIYDwAo-sU,1344
packaging-21.0.dist-info/METADATA,sha256=ZV4MesCjT-YxFEJvLzsJ31kKmmj4ltiMUl3JvqxJfqI,13418
packaging-21.0.dist-info/WHEEL,sha256=OqRkF0eY5GHssMorFjlbTIq072vpHpF60fIQA6lS9xA,92
packaging-21.0.dist-info/top_level.txt,sha256=zFdHrhWnPslzsiP455HutQsqPB6v0KCtNUMtUtrefDw,10
packaging-21.0.dist-info/RECORD,,
@@ -1,5 +0,0 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.36.2)
Root-Is-Purelib: true
Tag: py3-none-any

View file

@@ -1 +0,0 @@
packaging

View file

@@ -1,26 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
__all__ = [
"__title__",
"__summary__",
"__uri__",
"__version__",
"__author__",
"__email__",
"__license__",
"__copyright__",
]
__title__ = "packaging"
__summary__ = "Core utilities for Python packages"
__uri__ = "https://github.com/pypa/packaging"
__version__ = "21.0"
__author__ = "Donald Stufft and individual contributors"
__email__ = "donald@stufft.io"
__license__ = "BSD-2-Clause or Apache-2.0"
__copyright__ = "2014-2019 %s" % __author__

View file

@@ -1,25 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from .__about__ import (
__author__,
__copyright__,
__email__,
__license__,
__summary__,
__title__,
__uri__,
__version__,
)
__all__ = [
"__title__",
"__summary__",
"__uri__",
"__version__",
"__author__",
"__email__",
"__license__",
"__copyright__",
]

View file

@@ -1,301 +0,0 @@
import collections
import functools
import os
import re
import struct
import sys
import warnings
from typing import IO, Dict, Iterator, NamedTuple, Optional, Tuple
# Python does not provide platform information at sufficient granularity to
# identify the architecture of the running executable in some cases, so we
# determine it dynamically by reading the information from the running
# process. This only applies on Linux, which uses the ELF format.
class _ELFFileHeader:
# https://en.wikipedia.org/wiki/Executable_and_Linkable_Format#File_header
class _InvalidELFFileHeader(ValueError):
"""
An invalid ELF file header was found.
"""
ELF_MAGIC_NUMBER = 0x7F454C46
ELFCLASS32 = 1
ELFCLASS64 = 2
ELFDATA2LSB = 1
ELFDATA2MSB = 2
EM_386 = 3
EM_S390 = 22
EM_ARM = 40
EM_X86_64 = 62
EF_ARM_ABIMASK = 0xFF000000
EF_ARM_ABI_VER5 = 0x05000000
EF_ARM_ABI_FLOAT_HARD = 0x00000400
def __init__(self, file: IO[bytes]) -> None:
def unpack(fmt: str) -> int:
try:
data = file.read(struct.calcsize(fmt))
result: Tuple[int, ...] = struct.unpack(fmt, data)
except struct.error:
raise _ELFFileHeader._InvalidELFFileHeader()
return result[0]
self.e_ident_magic = unpack(">I")
if self.e_ident_magic != self.ELF_MAGIC_NUMBER:
raise _ELFFileHeader._InvalidELFFileHeader()
self.e_ident_class = unpack("B")
if self.e_ident_class not in {self.ELFCLASS32, self.ELFCLASS64}:
raise _ELFFileHeader._InvalidELFFileHeader()
self.e_ident_data = unpack("B")
if self.e_ident_data not in {self.ELFDATA2LSB, self.ELFDATA2MSB}:
raise _ELFFileHeader._InvalidELFFileHeader()
self.e_ident_version = unpack("B")
self.e_ident_osabi = unpack("B")
self.e_ident_abiversion = unpack("B")
self.e_ident_pad = file.read(7)
format_h = "<H" if self.e_ident_data == self.ELFDATA2LSB else ">H"
format_i = "<I" if self.e_ident_data == self.ELFDATA2LSB else ">I"
format_q = "<Q" if self.e_ident_data == self.ELFDATA2LSB else ">Q"
format_p = format_i if self.e_ident_class == self.ELFCLASS32 else format_q
self.e_type = unpack(format_h)
self.e_machine = unpack(format_h)
self.e_version = unpack(format_i)
self.e_entry = unpack(format_p)
self.e_phoff = unpack(format_p)
self.e_shoff = unpack(format_p)
self.e_flags = unpack(format_i)
self.e_ehsize = unpack(format_h)
self.e_phentsize = unpack(format_h)
self.e_phnum = unpack(format_h)
self.e_shentsize = unpack(format_h)
self.e_shnum = unpack(format_h)
self.e_shstrndx = unpack(format_h)
def _get_elf_header() -> Optional[_ELFFileHeader]:
try:
with open(sys.executable, "rb") as f:
elf_header = _ELFFileHeader(f)
except (OSError, TypeError, _ELFFileHeader._InvalidELFFileHeader):
return None
return elf_header
def _is_linux_armhf() -> bool:
# hard-float ABI can be detected from the ELF header of the running
# process
# https://static.docs.arm.com/ihi0044/g/aaelf32.pdf
elf_header = _get_elf_header()
if elf_header is None:
return False
result = elf_header.e_ident_class == elf_header.ELFCLASS32
result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB
result &= elf_header.e_machine == elf_header.EM_ARM
result &= (
elf_header.e_flags & elf_header.EF_ARM_ABIMASK
) == elf_header.EF_ARM_ABI_VER5
result &= (
elf_header.e_flags & elf_header.EF_ARM_ABI_FLOAT_HARD
) == elf_header.EF_ARM_ABI_FLOAT_HARD
return result
def _is_linux_i686() -> bool:
elf_header = _get_elf_header()
if elf_header is None:
return False
result = elf_header.e_ident_class == elf_header.ELFCLASS32
result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB
result &= elf_header.e_machine == elf_header.EM_386
return result
def _have_compatible_abi(arch: str) -> bool:
if arch == "armv7l":
return _is_linux_armhf()
if arch == "i686":
return _is_linux_i686()
return arch in {"x86_64", "aarch64", "ppc64", "ppc64le", "s390x"}
# If glibc ever changes its major version, we need to know what the last
# minor version was, so we can build the complete list of all versions.
# For now, guess what the highest minor version might be, assume it will
# be 50 for testing. Once this actually happens, update the dictionary
# with the actual value.
_LAST_GLIBC_MINOR: Dict[int, int] = collections.defaultdict(lambda: 50)
class _GLibCVersion(NamedTuple):
major: int
minor: int
def _glibc_version_string_confstr() -> Optional[str]:
"""
Primary implementation of glibc_version_string using os.confstr.
"""
# os.confstr is quite a bit faster than ctypes.CDLL. It's also less likely
# to be broken or missing. This strategy is used in the standard library
# platform module.
# https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183
try:
# os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17".
version_string = os.confstr("CS_GNU_LIBC_VERSION")
assert version_string is not None
_, version = version_string.split()
except (AssertionError, AttributeError, OSError, ValueError):
# os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)...
return None
return version
def _glibc_version_string_ctypes() -> Optional[str]:
"""
Fallback implementation of glibc_version_string using ctypes.
"""
try:
import ctypes
except ImportError:
return None
# ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen
# manpage says, "If filename is NULL, then the returned handle is for the
# main program". This way we can let the linker do the work to figure out
# which libc our process is actually using.
#
# We must also handle the special case where the executable is not a
# dynamically linked executable. This can occur when using musl libc,
# for example. In this situation, dlopen() will error, leading to an
# OSError. Interestingly, at least in the case of musl, there is no
# errno set on the OSError. The single string argument used to construct
# OSError comes from libc itself and is therefore not portable to
# hard code here. In any case, failure to call dlopen() means we
# can't proceed, so we bail on our attempt.
try:
process_namespace = ctypes.CDLL(None)
except OSError:
return None
try:
gnu_get_libc_version = process_namespace.gnu_get_libc_version
except AttributeError:
# Symbol doesn't exist -> therefore, we are not linked to
# glibc.
return None
# Call gnu_get_libc_version, which returns a string like "2.5"
gnu_get_libc_version.restype = ctypes.c_char_p
version_str: str = gnu_get_libc_version()
# py2 / py3 compatibility:
if not isinstance(version_str, str):
version_str = version_str.decode("ascii")
return version_str
def _glibc_version_string() -> Optional[str]:
"""Returns glibc version string, or None if not using glibc."""
return _glibc_version_string_confstr() or _glibc_version_string_ctypes()
def _parse_glibc_version(version_str: str) -> Tuple[int, int]:
"""Parse glibc version.
We use a regexp instead of str.split because we want to discard any
random junk that might come after the minor version -- this might happen
in patched/forked versions of glibc (e.g. Linaro's version of glibc
uses version strings like "2.20-2014.11"). See gh-3588.
"""
m = re.match(r"(?P<major>[0-9]+)\.(?P<minor>[0-9]+)", version_str)
if not m:
warnings.warn(
"Expected glibc version with 2 components major.minor,"
" got: %s" % version_str,
RuntimeWarning,
)
return -1, -1
return int(m.group("major")), int(m.group("minor"))
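# Illustrative only (not part of the original module): the regex above keeps
# just the leading major.minor pair, so a Linaro-style string still parses:
#   _parse_glibc_version("2.20-2014.11")  # -> (2, 20)
#   _parse_glibc_version("bogus")         # -> (-1, -1), with a RuntimeWarning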
@functools.lru_cache()
def _get_glibc_version() -> Tuple[int, int]:
version_str = _glibc_version_string()
if version_str is None:
return (-1, -1)
return _parse_glibc_version(version_str)
# From PEP 513, PEP 600
def _is_compatible(name: str, arch: str, version: _GLibCVersion) -> bool:
sys_glibc = _get_glibc_version()
if sys_glibc < version:
return False
# Check for presence of _manylinux module.
try:
import _manylinux # noqa
except ImportError:
return True
if hasattr(_manylinux, "manylinux_compatible"):
result = _manylinux.manylinux_compatible(version[0], version[1], arch)
if result is not None:
return bool(result)
return True
if version == _GLibCVersion(2, 5):
if hasattr(_manylinux, "manylinux1_compatible"):
return bool(_manylinux.manylinux1_compatible)
if version == _GLibCVersion(2, 12):
if hasattr(_manylinux, "manylinux2010_compatible"):
return bool(_manylinux.manylinux2010_compatible)
if version == _GLibCVersion(2, 17):
if hasattr(_manylinux, "manylinux2014_compatible"):
return bool(_manylinux.manylinux2014_compatible)
return True
_LEGACY_MANYLINUX_MAP = {
# CentOS 7 w/ glibc 2.17 (PEP 599)
(2, 17): "manylinux2014",
# CentOS 6 w/ glibc 2.12 (PEP 571)
(2, 12): "manylinux2010",
# CentOS 5 w/ glibc 2.5 (PEP 513)
(2, 5): "manylinux1",
}
def platform_tags(linux: str, arch: str) -> Iterator[str]:
if not _have_compatible_abi(arch):
return
# Oldest glibc to be supported regardless of architecture is (2, 17).
too_old_glibc2 = _GLibCVersion(2, 16)
if arch in {"x86_64", "i686"}:
# On x86/i686 also oldest glibc to be supported is (2, 5).
too_old_glibc2 = _GLibCVersion(2, 4)
current_glibc = _GLibCVersion(*_get_glibc_version())
glibc_max_list = [current_glibc]
# We can assume compatibility across glibc major versions.
# https://sourceware.org/bugzilla/show_bug.cgi?id=24636
#
# Build a list of maximum glibc versions so that we can
# output the canonical list of all glibc from current_glibc
# down to too_old_glibc2, including all intermediary versions.
for glibc_major in range(current_glibc.major - 1, 1, -1):
glibc_minor = _LAST_GLIBC_MINOR[glibc_major]
glibc_max_list.append(_GLibCVersion(glibc_major, glibc_minor))
for glibc_max in glibc_max_list:
if glibc_max.major == too_old_glibc2.major:
min_minor = too_old_glibc2.minor
else:
# For other glibc major versions oldest supported is (x, 0).
min_minor = -1
for glibc_minor in range(glibc_max.minor, min_minor, -1):
glibc_version = _GLibCVersion(glibc_max.major, glibc_minor)
tag = "manylinux_{}_{}".format(*glibc_version)
if _is_compatible(tag, arch, glibc_version):
yield linux.replace("linux", tag)
# Handle the legacy manylinux1, manylinux2010, manylinux2014 tags.
if glibc_version in _LEGACY_MANYLINUX_MAP:
legacy_tag = _LEGACY_MANYLINUX_MAP[glibc_version]
if _is_compatible(legacy_tag, arch, glibc_version):
yield linux.replace("linux", legacy_tag)

View file

@@ -1,136 +0,0 @@
"""PEP 656 support.
This module implements logic to detect if the currently running Python is
linked against musl, and what musl version is used.
"""
import contextlib
import functools
import operator
import os
import re
import struct
import subprocess
import sys
from typing import IO, Iterator, NamedTuple, Optional, Tuple
def _read_unpacked(f: IO[bytes], fmt: str) -> Tuple[int, ...]:
return struct.unpack(fmt, f.read(struct.calcsize(fmt)))
def _parse_ld_musl_from_elf(f: IO[bytes]) -> Optional[str]:
"""Detect musl libc location by parsing the Python executable.
Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca
ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html
"""
f.seek(0)
try:
ident = _read_unpacked(f, "16B")
except struct.error:
return None
if ident[:4] != tuple(b"\x7fELF"): # Invalid magic, not ELF.
return None
f.seek(struct.calcsize("HHI"), 1) # Skip file type, machine, and version.
try:
# e_fmt: Format for program header.
# p_fmt: Format for section header.
# p_idx: Indexes to find p_type, p_offset, and p_filesz.
e_fmt, p_fmt, p_idx = {
1: ("IIIIHHH", "IIIIIIII", (0, 1, 4)), # 32-bit.
2: ("QQQIHHH", "IIQQQQQQ", (0, 2, 5)), # 64-bit.
}[ident[4]]
except KeyError:
return None
else:
p_get = operator.itemgetter(*p_idx)
# Find the interpreter section and return its content.
try:
_, e_phoff, _, _, _, e_phentsize, e_phnum = _read_unpacked(f, e_fmt)
except struct.error:
return None
for i in range(e_phnum + 1):
f.seek(e_phoff + e_phentsize * i)
try:
p_type, p_offset, p_filesz = p_get(_read_unpacked(f, p_fmt))
except struct.error:
return None
if p_type != 3: # Not PT_INTERP.
continue
f.seek(p_offset)
interpreter = os.fsdecode(f.read(p_filesz)).strip("\0")
if "musl" not in interpreter:
return None
return interpreter
return None
class _MuslVersion(NamedTuple):
major: int
minor: int
def _parse_musl_version(output: str) -> Optional[_MuslVersion]:
lines = [n for n in (n.strip() for n in output.splitlines()) if n]
if len(lines) < 2 or lines[0][:4] != "musl":
return None
m = re.match(r"Version (\d+)\.(\d+)", lines[1])
if not m:
return None
return _MuslVersion(major=int(m.group(1)), minor=int(m.group(2)))
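# Illustrative only (assumed sample, not from the original file): the loader
# banner quoted in _get_musl_version's docstring below would parse as
#   _parse_musl_version("musl libc (x86_64)\nVersion 1.2.2\nDynamic Program Loader")
#   # -> _MuslVersion(major=1, minor=2)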
@functools.lru_cache()
def _get_musl_version(executable: str) -> Optional[_MuslVersion]:
"""Detect currently-running musl runtime version.
This is done by checking the specified executable's dynamic linking
information, and invoking the loader to parse its output for a version
string. If the loader is musl, the output would be something like::
musl libc (x86_64)
Version 1.2.2
Dynamic Program Loader
"""
with contextlib.ExitStack() as stack:
try:
f = stack.enter_context(open(executable, "rb"))
except IOError:
return None
ld = _parse_ld_musl_from_elf(f)
if not ld:
return None
proc = subprocess.run([ld], stderr=subprocess.PIPE, universal_newlines=True)
return _parse_musl_version(proc.stderr)
def platform_tags(arch: str) -> Iterator[str]:
"""Generate musllinux tags compatible to the current platform.
:param arch: Should be the part of platform tag after the ``linux_``
prefix, e.g. ``x86_64``. The ``linux_`` prefix is assumed as a
prerequisite for the current platform to be musllinux-compatible.
:returns: An iterator of compatible musllinux tags.
"""
sys_musl = _get_musl_version(sys.executable)
if sys_musl is None: # Python not dynamically linked against musl.
return
for minor in range(sys_musl.minor, -1, -1):
yield f"musllinux_{sys_musl.major}_{minor}_{arch}"
if __name__ == "__main__": # pragma: no cover
import sysconfig
plat = sysconfig.get_platform()
assert plat.startswith("linux-"), "not linux"
print("plat:", plat)
print("musl:", _get_musl_version(sys.executable))
print("tags:", end=" ")
for t in platform_tags(re.sub(r"[.-]", "_", plat.split("-", 1)[-1])):
print(t, end="\n ")

View file

@@ -1,67 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
class InfinityType:
def __repr__(self) -> str:
return "Infinity"
def __hash__(self) -> int:
return hash(repr(self))
def __lt__(self, other: object) -> bool:
return False
def __le__(self, other: object) -> bool:
return False
def __eq__(self, other: object) -> bool:
return isinstance(other, self.__class__)
def __ne__(self, other: object) -> bool:
return not isinstance(other, self.__class__)
def __gt__(self, other: object) -> bool:
return True
def __ge__(self, other: object) -> bool:
return True
def __neg__(self: object) -> "NegativeInfinityType":
return NegativeInfinity
Infinity = InfinityType()
class NegativeInfinityType:
def __repr__(self) -> str:
return "-Infinity"
def __hash__(self) -> int:
return hash(repr(self))
def __lt__(self, other: object) -> bool:
return True
def __le__(self, other: object) -> bool:
return True
def __eq__(self, other: object) -> bool:
return isinstance(other, self.__class__)
def __ne__(self, other: object) -> bool:
return not isinstance(other, self.__class__)
def __gt__(self, other: object) -> bool:
return False
def __ge__(self, other: object) -> bool:
return False
def __neg__(self: object) -> InfinityType:
return Infinity
NegativeInfinity = NegativeInfinityType()
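# Illustrative only (not part of the original file): these sentinels compare
# below/above any other value, which packaging's version.py relies on when
# building sort keys:
#   NegativeInfinity < (0,) < Infinity   # True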

View file

@@ -1,304 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
import operator
import os
import platform
import sys
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
from pyparsing import ( # noqa: N817
Forward,
Group,
Literal as L,
ParseException,
ParseResults,
QuotedString,
ZeroOrMore,
stringEnd,
stringStart,
)
from .specifiers import InvalidSpecifier, Specifier
__all__ = [
"InvalidMarker",
"UndefinedComparison",
"UndefinedEnvironmentName",
"Marker",
"default_environment",
]
Operator = Callable[[str, str], bool]
class InvalidMarker(ValueError):
"""
An invalid marker was found, users should refer to PEP 508.
"""
class UndefinedComparison(ValueError):
"""
An invalid operation was attempted on a value that doesn't support it.
"""
class UndefinedEnvironmentName(ValueError):
"""
A name was used that does not exist inside of the
environment.
"""
class Node:
def __init__(self, value: Any) -> None:
self.value = value
def __str__(self) -> str:
return str(self.value)
def __repr__(self) -> str:
return f"<{self.__class__.__name__}('{self}')>"
def serialize(self) -> str:
raise NotImplementedError
class Variable(Node):
def serialize(self) -> str:
return str(self)
class Value(Node):
def serialize(self) -> str:
return f'"{self}"'
class Op(Node):
def serialize(self) -> str:
return str(self)
VARIABLE = (
L("implementation_version")
| L("platform_python_implementation")
| L("implementation_name")
| L("python_full_version")
| L("platform_release")
| L("platform_version")
| L("platform_machine")
| L("platform_system")
| L("python_version")
| L("sys_platform")
| L("os_name")
| L("os.name") # PEP-345
| L("sys.platform") # PEP-345
| L("platform.version") # PEP-345
| L("platform.machine") # PEP-345
| L("platform.python_implementation") # PEP-345
| L("python_implementation") # undocumented setuptools legacy
| L("extra") # PEP-508
)
ALIASES = {
"os.name": "os_name",
"sys.platform": "sys_platform",
"platform.version": "platform_version",
"platform.machine": "platform_machine",
"platform.python_implementation": "platform_python_implementation",
"python_implementation": "platform_python_implementation",
}
VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0])))
VERSION_CMP = (
L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<")
)
MARKER_OP = VERSION_CMP | L("not in") | L("in")
MARKER_OP.setParseAction(lambda s, l, t: Op(t[0]))
MARKER_VALUE = QuotedString("'") | QuotedString('"')
MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0]))
BOOLOP = L("and") | L("or")
MARKER_VAR = VARIABLE | MARKER_VALUE
MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR)
MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0]))
LPAREN = L("(").suppress()
RPAREN = L(")").suppress()
MARKER_EXPR = Forward()
MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN)
MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR)
MARKER = stringStart + MARKER_EXPR + stringEnd
def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]:
if isinstance(results, ParseResults):
return [_coerce_parse_result(i) for i in results]
else:
return results
def _format_marker(
marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True
) -> str:
assert isinstance(marker, (list, tuple, str))
# Sometimes we have a structure like [[...]] which is a single item list
# where the single item is itself its own list. In that case we want to skip
# the rest of this function so that we don't get extraneous () on the
# outside.
if (
isinstance(marker, list)
and len(marker) == 1
and isinstance(marker[0], (list, tuple))
):
return _format_marker(marker[0])
if isinstance(marker, list):
inner = (_format_marker(m, first=False) for m in marker)
if first:
return " ".join(inner)
else:
return "(" + " ".join(inner) + ")"
elif isinstance(marker, tuple):
return " ".join([m.serialize() for m in marker])
else:
return marker
_operators: Dict[str, Operator] = {
"in": lambda lhs, rhs: lhs in rhs,
"not in": lambda lhs, rhs: lhs not in rhs,
"<": operator.lt,
"<=": operator.le,
"==": operator.eq,
"!=": operator.ne,
">=": operator.ge,
">": operator.gt,
}
def _eval_op(lhs: str, op: Op, rhs: str) -> bool:
try:
spec = Specifier("".join([op.serialize(), rhs]))
except InvalidSpecifier:
pass
else:
return spec.contains(lhs)
oper: Optional[Operator] = _operators.get(op.serialize())
if oper is None:
raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")
return oper(lhs, rhs)
class Undefined:
pass
_undefined = Undefined()
def _get_env(environment: Dict[str, str], name: str) -> str:
value: Union[str, Undefined] = environment.get(name, _undefined)
if isinstance(value, Undefined):
raise UndefinedEnvironmentName(
f"{name!r} does not exist in evaluation environment."
)
return value
def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool:
groups: List[List[bool]] = [[]]
for marker in markers:
assert isinstance(marker, (list, tuple, str))
if isinstance(marker, list):
groups[-1].append(_evaluate_markers(marker, environment))
elif isinstance(marker, tuple):
lhs, op, rhs = marker
if isinstance(lhs, Variable):
lhs_value = _get_env(environment, lhs.value)
rhs_value = rhs.value
else:
lhs_value = lhs.value
rhs_value = _get_env(environment, rhs.value)
groups[-1].append(_eval_op(lhs_value, op, rhs_value))
else:
assert marker in ["and", "or"]
if marker == "or":
groups.append([])
return any(all(item) for item in groups)
def format_full_version(info: "sys._version_info") -> str:
version = "{0.major}.{0.minor}.{0.micro}".format(info)
kind = info.releaselevel
if kind != "final":
version += kind[0] + str(info.serial)
return version
def default_environment() -> Dict[str, str]:
iver = format_full_version(sys.implementation.version)
implementation_name = sys.implementation.name
return {
"implementation_name": implementation_name,
"implementation_version": iver,
"os_name": os.name,
"platform_machine": platform.machine(),
"platform_release": platform.release(),
"platform_system": platform.system(),
"platform_version": platform.version(),
"python_full_version": platform.python_version(),
"platform_python_implementation": platform.python_implementation(),
"python_version": ".".join(platform.python_version_tuple()[:2]),
"sys_platform": sys.platform,
}
class Marker:
def __init__(self, marker: str) -> None:
try:
self._markers = _coerce_parse_result(MARKER.parseString(marker))
except ParseException as e:
raise InvalidMarker(
f"Invalid marker: {marker!r}, parse error at "
f"{marker[e.loc : e.loc + 8]!r}"
)
def __str__(self) -> str:
return _format_marker(self._markers)
def __repr__(self) -> str:
return f"<Marker('{self}')>"
def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool:
"""Evaluate a marker.
Return the boolean from evaluating the given marker against the
environment. environment is an optional argument to override all or
part of the determined environment.
The environment is determined from the current Python process.
"""
current_environment = default_environment()
if environment is not None:
current_environment.update(environment)
return _evaluate_markers(self._markers, current_environment)
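A brief, hedged usage sketch for ``Marker.evaluate()`` as described in the docstring above (the environment override supplies the ``extra`` name, which is not part of the default environment):
from packaging.markers import Marker
m = Marker('python_version >= "2.7" and extra == "test"')
m.evaluate({"extra": "test"})   # True on any Python 3 interpreter
m.evaluate({"extra": ""})       # False: the extra clause no longer matches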

View file

@@ -1,146 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
import re
import string
import urllib.parse
from typing import List, Optional as TOptional, Set
from pyparsing import ( # noqa
Combine,
Literal as L,
Optional,
ParseException,
Regex,
Word,
ZeroOrMore,
originalTextFor,
stringEnd,
stringStart,
)
from .markers import MARKER_EXPR, Marker
from .specifiers import LegacySpecifier, Specifier, SpecifierSet
class InvalidRequirement(ValueError):
"""
An invalid requirement was found, users should refer to PEP 508.
"""
ALPHANUM = Word(string.ascii_letters + string.digits)
LBRACKET = L("[").suppress()
RBRACKET = L("]").suppress()
LPAREN = L("(").suppress()
RPAREN = L(")").suppress()
COMMA = L(",").suppress()
SEMICOLON = L(";").suppress()
AT = L("@").suppress()
PUNCTUATION = Word("-_.")
IDENTIFIER_END = ALPHANUM | (ZeroOrMore(PUNCTUATION) + ALPHANUM)
IDENTIFIER = Combine(ALPHANUM + ZeroOrMore(IDENTIFIER_END))
NAME = IDENTIFIER("name")
EXTRA = IDENTIFIER
URI = Regex(r"[^ ]+")("url")
URL = AT + URI
EXTRAS_LIST = EXTRA + ZeroOrMore(COMMA + EXTRA)
EXTRAS = (LBRACKET + Optional(EXTRAS_LIST) + RBRACKET)("extras")
VERSION_PEP440 = Regex(Specifier._regex_str, re.VERBOSE | re.IGNORECASE)
VERSION_LEGACY = Regex(LegacySpecifier._regex_str, re.VERBOSE | re.IGNORECASE)
VERSION_ONE = VERSION_PEP440 ^ VERSION_LEGACY
VERSION_MANY = Combine(
VERSION_ONE + ZeroOrMore(COMMA + VERSION_ONE), joinString=",", adjacent=False
)("_raw_spec")
_VERSION_SPEC = Optional((LPAREN + VERSION_MANY + RPAREN) | VERSION_MANY)
_VERSION_SPEC.setParseAction(lambda s, l, t: t._raw_spec or "")
VERSION_SPEC = originalTextFor(_VERSION_SPEC)("specifier")
VERSION_SPEC.setParseAction(lambda s, l, t: t[1])
MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker")
MARKER_EXPR.setParseAction(
lambda s, l, t: Marker(s[t._original_start : t._original_end])
)
MARKER_SEPARATOR = SEMICOLON
MARKER = MARKER_SEPARATOR + MARKER_EXPR
VERSION_AND_MARKER = VERSION_SPEC + Optional(MARKER)
URL_AND_MARKER = URL + Optional(MARKER)
NAMED_REQUIREMENT = NAME + Optional(EXTRAS) + (URL_AND_MARKER | VERSION_AND_MARKER)
REQUIREMENT = stringStart + NAMED_REQUIREMENT + stringEnd
# pyparsing isn't thread safe during initialization, so we do it eagerly, see
# issue #104
REQUIREMENT.parseString("x[]")
class Requirement:
"""Parse a requirement.
Parse a given requirement string into its parts, such as name, specifier,
URL, and extras. Raises InvalidRequirement on a badly-formed requirement
string.
"""
# TODO: Can we test whether something is contained within a requirement?
# If so how do we do that? Do we need to test against the _name_ of
# the thing as well as the version? What about the markers?
# TODO: Can we normalize the name and extra name?
def __init__(self, requirement_string: str) -> None:
try:
req = REQUIREMENT.parseString(requirement_string)
except ParseException as e:
raise InvalidRequirement(
f'Parse error at "{ requirement_string[e.loc : e.loc + 8]!r}": {e.msg}'
)
self.name: str = req.name
if req.url:
parsed_url = urllib.parse.urlparse(req.url)
if parsed_url.scheme == "file":
if urllib.parse.urlunparse(parsed_url) != req.url:
raise InvalidRequirement("Invalid URL given")
elif not (parsed_url.scheme and parsed_url.netloc) or (
not parsed_url.scheme and not parsed_url.netloc
):
raise InvalidRequirement(f"Invalid URL: {req.url}")
self.url: TOptional[str] = req.url
else:
self.url = None
self.extras: Set[str] = set(req.extras.asList() if req.extras else [])
self.specifier: SpecifierSet = SpecifierSet(req.specifier)
self.marker: TOptional[Marker] = req.marker if req.marker else None
def __str__(self) -> str:
parts: List[str] = [self.name]
if self.extras:
formatted_extras = ",".join(sorted(self.extras))
parts.append(f"[{formatted_extras}]")
if self.specifier:
parts.append(str(self.specifier))
if self.url:
parts.append(f"@ {self.url}")
if self.marker:
parts.append(" ")
if self.marker:
parts.append(f"; {self.marker}")
return "".join(parts)
def __repr__(self) -> str:
return f"<Requirement('{self}')>"

View file

@@ -1,828 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
import abc
import functools
import itertools
import re
import warnings
from typing import (
Callable,
Dict,
Iterable,
Iterator,
List,
Optional,
Pattern,
Set,
Tuple,
TypeVar,
Union,
)
from .utils import canonicalize_version
from .version import LegacyVersion, Version, parse
ParsedVersion = Union[Version, LegacyVersion]
UnparsedVersion = Union[Version, LegacyVersion, str]
VersionTypeVar = TypeVar("VersionTypeVar", bound=UnparsedVersion)
CallableOperator = Callable[[ParsedVersion, str], bool]
class InvalidSpecifier(ValueError):
"""
An invalid specifier was found, users should refer to PEP 440.
"""
class BaseSpecifier(metaclass=abc.ABCMeta):
@abc.abstractmethod
def __str__(self) -> str:
"""
Returns the str representation of this Specifier like object. This
should be representative of the Specifier itself.
"""
@abc.abstractmethod
def __hash__(self) -> int:
"""
Returns a hash value for this Specifier like object.
"""
@abc.abstractmethod
def __eq__(self, other: object) -> bool:
"""
Returns a boolean representing whether or not the two Specifier like
objects are equal.
"""
@abc.abstractmethod
def __ne__(self, other: object) -> bool:
"""
Returns a boolean representing whether or not the two Specifier like
objects are not equal.
"""
@abc.abstractproperty
def prereleases(self) -> Optional[bool]:
"""
Returns whether or not pre-releases as a whole are allowed by this
specifier.
"""
@prereleases.setter
def prereleases(self, value: bool) -> None:
"""
Sets whether or not pre-releases as a whole are allowed by this
specifier.
"""
@abc.abstractmethod
def contains(self, item: str, prereleases: Optional[bool] = None) -> bool:
"""
Determines if the given item is contained within this specifier.
"""
@abc.abstractmethod
def filter(
self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None
) -> Iterable[VersionTypeVar]:
"""
Takes an iterable of items and filters them so that only items which
are contained within this specifier are allowed in it.
"""
class _IndividualSpecifier(BaseSpecifier):
_operators: Dict[str, str] = {}
_regex: Pattern[str]
def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None:
match = self._regex.search(spec)
if not match:
raise InvalidSpecifier(f"Invalid specifier: '{spec}'")
self._spec: Tuple[str, str] = (
match.group("operator").strip(),
match.group("version").strip(),
)
# Store whether or not this Specifier should accept prereleases
self._prereleases = prereleases
def __repr__(self) -> str:
pre = (
f", prereleases={self.prereleases!r}"
if self._prereleases is not None
else ""
)
return "<{}({!r}{})>".format(self.__class__.__name__, str(self), pre)
def __str__(self) -> str:
return "{}{}".format(*self._spec)
@property
def _canonical_spec(self) -> Tuple[str, str]:
return self._spec[0], canonicalize_version(self._spec[1])
def __hash__(self) -> int:
return hash(self._canonical_spec)
def __eq__(self, other: object) -> bool:
if isinstance(other, str):
try:
other = self.__class__(str(other))
except InvalidSpecifier:
return NotImplemented
elif not isinstance(other, self.__class__):
return NotImplemented
return self._canonical_spec == other._canonical_spec
def __ne__(self, other: object) -> bool:
if isinstance(other, str):
try:
other = self.__class__(str(other))
except InvalidSpecifier:
return NotImplemented
elif not isinstance(other, self.__class__):
return NotImplemented
return self._spec != other._spec
def _get_operator(self, op: str) -> CallableOperator:
operator_callable: CallableOperator = getattr(
self, f"_compare_{self._operators[op]}"
)
return operator_callable
def _coerce_version(self, version: UnparsedVersion) -> ParsedVersion:
if not isinstance(version, (LegacyVersion, Version)):
version = parse(version)
return version
@property
def operator(self) -> str:
return self._spec[0]
@property
def version(self) -> str:
return self._spec[1]
@property
def prereleases(self) -> Optional[bool]:
return self._prereleases
@prereleases.setter
def prereleases(self, value: bool) -> None:
self._prereleases = value
def __contains__(self, item: str) -> bool:
return self.contains(item)
def contains(
self, item: UnparsedVersion, prereleases: Optional[bool] = None
) -> bool:
# Determine if prereleases are to be allowed or not.
if prereleases is None:
prereleases = self.prereleases
# Normalize item to a Version or LegacyVersion, this allows us to have
# a shortcut for ``"2.0" in Specifier(">=2")
normalized_item = self._coerce_version(item)
# Determine if we should be supporting prereleases in this specifier
# or not. If we do not support prereleases, then we can short circuit
# the logic if this version is a prerelease.
if normalized_item.is_prerelease and not prereleases:
return False
# Actually do the comparison to determine if this item is contained
# within this Specifier or not.
operator_callable: CallableOperator = self._get_operator(self.operator)
return operator_callable(normalized_item, self.version)
def filter(
self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None
) -> Iterable[VersionTypeVar]:
yielded = False
found_prereleases = []
kw = {"prereleases": prereleases if prereleases is not None else True}
# Attempt to iterate over all the values in the iterable and if any of
# them match, yield them.
for version in iterable:
parsed_version = self._coerce_version(version)
if self.contains(parsed_version, **kw):
# If our version is a prerelease, and we were not set to allow
# prereleases, then we'll store it for later in case nothing
# else matches this specifier.
if parsed_version.is_prerelease and not (
prereleases or self.prereleases
):
found_prereleases.append(version)
# Either this is not a prerelease, or we should have been
# accepting prereleases from the beginning.
else:
yielded = True
yield version
# Now that we've iterated over everything, determine if we've yielded
# any values, and if we have not and we have any prereleases stored up
# then we will go ahead and yield the prereleases.
if not yielded and found_prereleases:
for version in found_prereleases:
yield version
class LegacySpecifier(_IndividualSpecifier):
_regex_str = r"""
(?P<operator>(==|!=|<=|>=|<|>))
\s*
(?P<version>
[^,;\s)]* # Since this is a "legacy" specifier, and the version
# string can be just about anything, we match everything
# except for whitespace, a semi-colon for marker support,
# a closing paren since versions can be enclosed in
# them, and a comma since it's a version separator.
)
"""
_regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)
_operators = {
"==": "equal",
"!=": "not_equal",
"<=": "less_than_equal",
">=": "greater_than_equal",
"<": "less_than",
">": "greater_than",
}
def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None:
super().__init__(spec, prereleases)
warnings.warn(
"Creating a LegacyVersion has been deprecated and will be "
"removed in the next major release",
DeprecationWarning,
)
def _coerce_version(self, version: UnparsedVersion) -> LegacyVersion:
if not isinstance(version, LegacyVersion):
version = LegacyVersion(str(version))
return version
def _compare_equal(self, prospective: LegacyVersion, spec: str) -> bool:
return prospective == self._coerce_version(spec)
def _compare_not_equal(self, prospective: LegacyVersion, spec: str) -> bool:
return prospective != self._coerce_version(spec)
def _compare_less_than_equal(self, prospective: LegacyVersion, spec: str) -> bool:
return prospective <= self._coerce_version(spec)
def _compare_greater_than_equal(
self, prospective: LegacyVersion, spec: str
) -> bool:
return prospective >= self._coerce_version(spec)
def _compare_less_than(self, prospective: LegacyVersion, spec: str) -> bool:
return prospective < self._coerce_version(spec)
def _compare_greater_than(self, prospective: LegacyVersion, spec: str) -> bool:
return prospective > self._coerce_version(spec)
def _require_version_compare(
fn: Callable[["Specifier", ParsedVersion, str], bool]
) -> Callable[["Specifier", ParsedVersion, str], bool]:
@functools.wraps(fn)
def wrapped(self: "Specifier", prospective: ParsedVersion, spec: str) -> bool:
if not isinstance(prospective, Version):
return False
return fn(self, prospective, spec)
return wrapped
class Specifier(_IndividualSpecifier):
_regex_str = r"""
(?P<operator>(~=|==|!=|<=|>=|<|>|===))
(?P<version>
(?:
# The identity operators allow for an escape hatch that will
# do an exact string match of the version you wish to install.
# This will not be parsed by PEP 440 and we cannot determine
# any semantic meaning from it. This operator is discouraged
# but included entirely as an escape hatch.
(?<====) # Only match for the identity operator
\s*
[^\s]* # We just match everything, except for whitespace
# since we are only testing for strict identity.
)
|
(?:
# The (non)equality operators allow for wild card and local
# versions to be specified so we have to define these two
# operators separately to enable that.
(?<===|!=) # Only match for equals and not equals
\s*
v?
(?:[0-9]+!)? # epoch
[0-9]+(?:\.[0-9]+)* # release
(?: # pre release
[-_\.]?
(a|b|c|rc|alpha|beta|pre|preview)
[-_\.]?
[0-9]*
)?
(?: # post release
(?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
)?
# You cannot use a wild card and a dev or local version
# together so group them with a | and make them optional.
(?:
(?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
(?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local
|
\.\* # Wild card syntax of .*
)?
)
|
(?:
# The compatible operator requires at least two digits in the
# release segment.
(?<=~=) # Only match for the compatible operator
\s*
v?
(?:[0-9]+!)? # epoch
[0-9]+(?:\.[0-9]+)+ # release (We have a + instead of a *)
(?: # pre release
[-_\.]?
(a|b|c|rc|alpha|beta|pre|preview)
[-_\.]?
[0-9]*
)?
(?: # post release
(?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
)?
(?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
)
|
(?:
# All other operators only allow a sub set of what the
# (non)equality operators do. Specifically they do not allow
# local versions to be specified nor do they allow the prefix
# matching wild cards.
(?<!==|!=|~=) # We have special cases for these
# operators so we want to make sure they
# don't match here.
\s*
v?
(?:[0-9]+!)? # epoch
[0-9]+(?:\.[0-9]+)* # release
(?: # pre release
[-_\.]?
(a|b|c|rc|alpha|beta|pre|preview)
[-_\.]?
[0-9]*
)?
(?: # post release
(?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
)?
(?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
)
)
"""
_regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)
_operators = {
"~=": "compatible",
"==": "equal",
"!=": "not_equal",
"<=": "less_than_equal",
">=": "greater_than_equal",
"<": "less_than",
">": "greater_than",
"===": "arbitrary",
}
@_require_version_compare
def _compare_compatible(self, prospective: ParsedVersion, spec: str) -> bool:
# Compatible releases have an equivalent combination of >= and ==. That
# is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
# implement this in terms of the other specifiers instead of
# implementing it ourselves. The only thing we need to do is construct
# the other specifiers.
# We want everything but the last item in the version, but we want to
# ignore suffix segments.
prefix = ".".join(
list(itertools.takewhile(_is_not_suffix, _version_split(spec)))[:-1]
)
# Add the prefix notation to the end of our string
prefix += ".*"
return self._get_operator(">=")(prospective, spec) and self._get_operator("==")(
prospective, prefix
)
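# Illustrative only (not part of the original file): per the comment above,
# "~=2.2" is treated as ">=2.2" plus "==2.*", so:
#   Specifier("~=2.2").contains("2.3")  # True
#   Specifier("~=2.2").contains("3.0")  # False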
@_require_version_compare
def _compare_equal(self, prospective: ParsedVersion, spec: str) -> bool:
# We need special logic to handle prefix matching
if spec.endswith(".*"):
# In the case of prefix matching we want to ignore local segment.
prospective = Version(prospective.public)
# Split the spec out by dots, and pretend that there is an implicit
# dot in between a release segment and a pre-release segment.
split_spec = _version_split(spec[:-2]) # Remove the trailing .*
# Split the prospective version out by dots, and pretend that there
# is an implicit dot in between a release segment and a pre-release
# segment.
split_prospective = _version_split(str(prospective))
# Shorten the prospective version to be the same length as the spec
# so that we can determine if the specifier is a prefix of the
# prospective version or not.
shortened_prospective = split_prospective[: len(split_spec)]
# Pad out our two sides with zeros so that they both equal the same
# length.
padded_spec, padded_prospective = _pad_version(
split_spec, shortened_prospective
)
return padded_prospective == padded_spec
else:
# Convert our spec string into a Version
spec_version = Version(spec)
# If the specifier does not have a local segment, then we want to
# act as if the prospective version also does not have a local
# segment.
if not spec_version.local:
prospective = Version(prospective.public)
return prospective == spec_version
@_require_version_compare
def _compare_not_equal(self, prospective: ParsedVersion, spec: str) -> bool:
return not self._compare_equal(prospective, spec)
@_require_version_compare
def _compare_less_than_equal(self, prospective: ParsedVersion, spec: str) -> bool:
# NB: Local version identifiers are NOT permitted in the version
# specifier, so local version labels can be universally removed from
# the prospective version.
return Version(prospective.public) <= Version(spec)
@_require_version_compare
def _compare_greater_than_equal(
self, prospective: ParsedVersion, spec: str
) -> bool:
# NB: Local version identifiers are NOT permitted in the version
# specifier, so local version labels can be universally removed from
# the prospective version.
return Version(prospective.public) >= Version(spec)
@_require_version_compare
def _compare_less_than(self, prospective: ParsedVersion, spec_str: str) -> bool:
# Convert our spec to a Version instance, since we'll want to work with
# it as a version.
spec = Version(spec_str)
# Check to see if the prospective version is less than the spec
# version. If it's not we can short circuit and just return False now
# instead of doing extra unneeded work.
if not prospective < spec:
return False
# This special case is here so that, unless the specifier itself
# includes a pre-release version, we do not accept pre-release
# versions for the version mentioned in the specifier (e.g. <3.1 should
# not match 3.1.dev0, but should match 3.0.dev0).
if not spec.is_prerelease and prospective.is_prerelease:
if Version(prospective.base_version) == Version(spec.base_version):
return False
# If we've gotten to here, it means that prospective version is both
# less than the spec version *and* it's not a pre-release of the same
# version in the spec.
return True
@_require_version_compare
def _compare_greater_than(self, prospective: ParsedVersion, spec_str: str) -> bool:
# Convert our spec to a Version instance, since we'll want to work with
# it as a version.
spec = Version(spec_str)
# Check to see if the prospective version is greater than the spec
# version. If it's not we can short circuit and just return False now
# instead of doing extra unneeded work.
if not prospective > spec:
return False
# This special case is here so that, unless the specifier itself
# includes a post-release version, we do not accept
# post-release versions for the version mentioned in the specifier
# (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0).
if not spec.is_postrelease and prospective.is_postrelease:
if Version(prospective.base_version) == Version(spec.base_version):
return False
# Ensure that we do not allow a local version of the version mentioned
# in the specifier, which is technically greater than, to match.
if prospective.local is not None:
if Version(prospective.base_version) == Version(spec.base_version):
return False
# If we've gotten to here, it means that prospective version is both
# greater than the spec version *and* it's not a post-release or local
# version of the same version in the spec.
return True
def _compare_arbitrary(self, prospective: Version, spec: str) -> bool:
return str(prospective).lower() == str(spec).lower()
@property
def prereleases(self) -> bool:
# If there is an explicit prereleases set for this, then we'll just
# blindly use that.
if self._prereleases is not None:
return self._prereleases
# Look at all of our specifiers and determine if they are inclusive
# operators, and if they are if they are including an explicit
# prerelease.
operator, version = self._spec
if operator in ["==", ">=", "<=", "~=", "==="]:
# The == specifier can include a trailing .*, if it does we
# want to remove before parsing.
if operator == "==" and version.endswith(".*"):
version = version[:-2]
# Parse the version, and if it is a pre-release than this
# specifier allows pre-releases.
if parse(version).is_prerelease:
return True
return False
@prereleases.setter
def prereleases(self, value: bool) -> None:
self._prereleases = value
_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$")
def _version_split(version: str) -> List[str]:
result: List[str] = []
for item in version.split("."):
match = _prefix_regex.search(item)
if match:
result.extend(match.groups())
else:
result.append(item)
return result
def _is_not_suffix(segment: str) -> bool:
return not any(
segment.startswith(prefix) for prefix in ("dev", "a", "b", "rc", "post")
)
def _pad_version(left: List[str], right: List[str]) -> Tuple[List[str], List[str]]:
left_split, right_split = [], []
# Get the release segment of our versions
left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left)))
right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right)))
# Get the rest of our versions
left_split.append(left[len(left_split[0]) :])
right_split.append(right[len(right_split[0]) :])
# Insert our padding
left_split.insert(1, ["0"] * max(0, len(right_split[0]) - len(left_split[0])))
right_split.insert(1, ["0"] * max(0, len(left_split[0]) - len(right_split[0])))
return (list(itertools.chain(*left_split)), list(itertools.chain(*right_split)))
class SpecifierSet(BaseSpecifier):
def __init__(
self, specifiers: str = "", prereleases: Optional[bool] = None
) -> None:
# Split on , to break each individual specifier into its own item, and
# strip each item to remove leading/trailing whitespace.
split_specifiers = [s.strip() for s in specifiers.split(",") if s.strip()]
# Parse each individual specifier, attempting first to make it a
# Specifier and falling back to a LegacySpecifier.
parsed: Set[_IndividualSpecifier] = set()
for specifier in split_specifiers:
try:
parsed.add(Specifier(specifier))
except InvalidSpecifier:
parsed.add(LegacySpecifier(specifier))
# Turn our parsed specifiers into a frozen set and save them for later.
self._specs = frozenset(parsed)
# Store our prereleases value so we can use it later to determine if
# we accept prereleases or not.
self._prereleases = prereleases
def __repr__(self) -> str:
pre = (
f", prereleases={self.prereleases!r}"
if self._prereleases is not None
else ""
)
return "<SpecifierSet({!r}{})>".format(str(self), pre)
def __str__(self) -> str:
return ",".join(sorted(str(s) for s in self._specs))
def __hash__(self) -> int:
return hash(self._specs)
def __and__(self, other: Union["SpecifierSet", str]) -> "SpecifierSet":
if isinstance(other, str):
other = SpecifierSet(other)
elif not isinstance(other, SpecifierSet):
return NotImplemented
specifier = SpecifierSet()
specifier._specs = frozenset(self._specs | other._specs)
if self._prereleases is None and other._prereleases is not None:
specifier._prereleases = other._prereleases
elif self._prereleases is not None and other._prereleases is None:
specifier._prereleases = self._prereleases
elif self._prereleases == other._prereleases:
specifier._prereleases = self._prereleases
else:
raise ValueError(
"Cannot combine SpecifierSets with True and False prerelease "
"overrides."
)
return specifier
def __eq__(self, other: object) -> bool:
if isinstance(other, (str, _IndividualSpecifier)):
other = SpecifierSet(str(other))
elif not isinstance(other, SpecifierSet):
return NotImplemented
return self._specs == other._specs
def __ne__(self, other: object) -> bool:
if isinstance(other, (str, _IndividualSpecifier)):
other = SpecifierSet(str(other))
elif not isinstance(other, SpecifierSet):
return NotImplemented
return self._specs != other._specs
def __len__(self) -> int:
return len(self._specs)
def __iter__(self) -> Iterator[_IndividualSpecifier]:
return iter(self._specs)
@property
def prereleases(self) -> Optional[bool]:
# If we have been given an explicit prerelease modifier, then we'll
# pass that through here.
if self._prereleases is not None:
return self._prereleases
# If we don't have any specifiers, and we don't have a forced value,
# then we'll just return None since we don't know if this should have
# pre-releases or not.
if not self._specs:
return None
# Otherwise we'll see if any of the given specifiers accept
# prereleases, if any of them do we'll return True, otherwise False.
return any(s.prereleases for s in self._specs)
@prereleases.setter
def prereleases(self, value: bool) -> None:
self._prereleases = value
def __contains__(self, item: UnparsedVersion) -> bool:
return self.contains(item)
def contains(
self, item: UnparsedVersion, prereleases: Optional[bool] = None
) -> bool:
# Ensure that our item is a Version or LegacyVersion instance.
if not isinstance(item, (LegacyVersion, Version)):
item = parse(item)
# Determine if we're forcing a prerelease or not, if we're not forcing
# one for this particular filter call, then we'll use whatever the
# SpecifierSet thinks for whether or not we should support prereleases.
if prereleases is None:
prereleases = self.prereleases
# We can determine if we're going to allow pre-releases by looking to
# see if any of the underlying items supports them. If none of them do
# and this item is a pre-release then we do not allow it and we can
# short circuit that here.
# Note: This means that 1.0.dev1 would not be contained in something
# like >=1.0.devabc, however it would be in >=1.0.devabc,>0.0.dev0
if not prereleases and item.is_prerelease:
return False
# We simply dispatch to the underlying specs here to make sure that the
# given version is contained within all of them.
# Note: This use of all() here means that an empty set of specifiers
# will always return True, this is an explicit design decision.
return all(s.contains(item, prereleases=prereleases) for s in self._specs)
def filter(
self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None
) -> Iterable[VersionTypeVar]:
# Determine if we're forcing a prerelease or not, if we're not forcing
# one for this particular filter call, then we'll use whatever the
# SpecifierSet thinks for whether or not we should support prereleases.
if prereleases is None:
prereleases = self.prereleases
# If we have any specifiers, then we want to wrap our iterable in the
# filter method for each one, this will act as a logical AND amongst
# each specifier.
if self._specs:
for spec in self._specs:
iterable = spec.filter(iterable, prereleases=bool(prereleases))
return iterable
# If we do not have any specifiers, then we need to have a rough filter
# which will filter out any pre-releases, unless there are no final
# releases, and which will filter out LegacyVersion in general.
else:
filtered: List[VersionTypeVar] = []
found_prereleases: List[VersionTypeVar] = []
item: UnparsedVersion
parsed_version: Union[Version, LegacyVersion]
for item in iterable:
# Ensure that we have some kind of Version class for this item.
if not isinstance(item, (LegacyVersion, Version)):
parsed_version = parse(item)
else:
parsed_version = item
# Filter out any item which is parsed as a LegacyVersion
if isinstance(parsed_version, LegacyVersion):
continue
# Store any item which is a pre-release for later unless we've
# already found a final version or we are accepting prereleases
if parsed_version.is_prerelease and not prereleases:
if not filtered:
found_prereleases.append(item)
else:
filtered.append(item)
# If we've found no items except for pre-releases, then we'll go
# ahead and use the pre-releases
if not filtered and found_prereleases and prereleases is None:
return found_prereleases
return filtered
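A short, hedged sketch of how ``SpecifierSet`` combines the pieces above:
from packaging.specifiers import SpecifierSet
specs = SpecifierSet(">=1.0,!=1.3")
"1.4" in specs                              # True
specs.contains("2.0a1")                     # False by default
specs.contains("2.0a1", prereleases=True)   # True
list(specs.filter(["1.0", "1.3", "2.0"]))   # ['1.0', '2.0']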

View file

@@ -1,484 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
import logging
import platform
import sys
import sysconfig
from importlib.machinery import EXTENSION_SUFFIXES
from typing import (
Dict,
FrozenSet,
Iterable,
Iterator,
List,
Optional,
Sequence,
Tuple,
Union,
cast,
)
from . import _manylinux, _musllinux
logger = logging.getLogger(__name__)
PythonVersion = Sequence[int]
MacVersion = Tuple[int, int]
INTERPRETER_SHORT_NAMES: Dict[str, str] = {
"python": "py", # Generic.
"cpython": "cp",
"pypy": "pp",
"ironpython": "ip",
"jython": "jy",
}
_32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32
class Tag:
"""
A representation of the tag triple for a wheel.
Instances are considered immutable and thus are hashable. Equality checking
is also supported.
"""
__slots__ = ["_interpreter", "_abi", "_platform", "_hash"]
def __init__(self, interpreter: str, abi: str, platform: str) -> None:
self._interpreter = interpreter.lower()
self._abi = abi.lower()
self._platform = platform.lower()
# The __hash__ of every single element in a Set[Tag] will be evaluated each time
# that a set calls its `.isdisjoint()` method, which may be called hundreds of
# times when scanning a page of links for packages with tags matching that
# Set[Tag]. Pre-computing the value here produces significant speedups for
# downstream consumers.
self._hash = hash((self._interpreter, self._abi, self._platform))
@property
def interpreter(self) -> str:
return self._interpreter
@property
def abi(self) -> str:
return self._abi
@property
def platform(self) -> str:
return self._platform
def __eq__(self, other: object) -> bool:
if not isinstance(other, Tag):
return NotImplemented
return (
(self._hash == other._hash) # Short-circuit ASAP for perf reasons.
and (self._platform == other._platform)
and (self._abi == other._abi)
and (self._interpreter == other._interpreter)
)
def __hash__(self) -> int:
return self._hash
def __str__(self) -> str:
return f"{self._interpreter}-{self._abi}-{self._platform}"
def __repr__(self) -> str:
return "<{self} @ {self_id}>".format(self=self, self_id=id(self))
def parse_tag(tag: str) -> FrozenSet[Tag]:
"""
Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances.
Returning a set is required due to the possibility that the tag is a
compressed tag set.
"""
tags = set()
interpreters, abis, platforms = tag.split("-")
for interpreter in interpreters.split("."):
for abi in abis.split("."):
for platform_ in platforms.split("."):
tags.add(Tag(interpreter, abi, platform_))
return frozenset(tags)
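# Illustrative only (not part of the original file): a compressed tag set
# expands into one Tag per combination, e.g.
#   parse_tag("cp38-cp38-manylinux1_x86_64.manylinux2014_x86_64")
#   # -> frozenset of two Tags, differing only in their platform field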
def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]:
value = sysconfig.get_config_var(name)
if value is None and warn:
logger.debug(
"Config variable '%s' is unset, Python ABI tag may be incorrect", name
)
return value
def _normalize_string(string: str) -> str:
return string.replace(".", "_").replace("-", "_")
def _abi3_applies(python_version: PythonVersion) -> bool:
"""
Determine if the Python version supports abi3.
PEP 384 was first implemented in Python 3.2.
"""
return len(python_version) > 1 and tuple(python_version) >= (3, 2)
def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]:
py_version = tuple(py_version) # To allow for version comparison.
abis = []
version = _version_nodot(py_version[:2])
debug = pymalloc = ucs4 = ""
with_debug = _get_config_var("Py_DEBUG", warn)
has_refcount = hasattr(sys, "gettotalrefcount")
# Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
# extension modules is the best option.
# https://github.com/pypa/pip/issues/3383#issuecomment-173267692
has_ext = "_d.pyd" in EXTENSION_SUFFIXES
if with_debug or (with_debug is None and (has_refcount or has_ext)):
debug = "d"
if py_version < (3, 8):
with_pymalloc = _get_config_var("WITH_PYMALLOC", warn)
if with_pymalloc or with_pymalloc is None:
pymalloc = "m"
if py_version < (3, 3):
unicode_size = _get_config_var("Py_UNICODE_SIZE", warn)
if unicode_size == 4 or (
unicode_size is None and sys.maxunicode == 0x10FFFF
):
ucs4 = "u"
elif debug:
# Debug builds can also load "normal" extension modules.
# We can also assume no UCS-4 or pymalloc requirement.
abis.append(f"cp{version}")
abis.insert(
0,
"cp{version}{debug}{pymalloc}{ucs4}".format(
version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4
),
)
return abis
def cpython_tags(
python_version: Optional[PythonVersion] = None,
abis: Optional[Iterable[str]] = None,
platforms: Optional[Iterable[str]] = None,
*,
warn: bool = False,
) -> Iterator[Tag]:
"""
Yields the tags for a CPython interpreter.
The tags consist of:
- cp<python_version>-<abi>-<platform>
- cp<python_version>-abi3-<platform>
- cp<python_version>-none-<platform>
- cp<less than python_version>-abi3-<platform> # Older Python versions down to 3.2.
If python_version only specifies a major version then user-provided ABIs and
the 'none' ABI tag will be used.
If 'abi3' or 'none' are specified in 'abis' then they will be yielded at
their normal position and not at the beginning.
"""
if not python_version:
python_version = sys.version_info[:2]
interpreter = "cp{}".format(_version_nodot(python_version[:2]))
if abis is None:
if len(python_version) > 1:
abis = _cpython_abis(python_version, warn)
else:
abis = []
abis = list(abis)
# 'abi3' and 'none' are explicitly handled later.
for explicit_abi in ("abi3", "none"):
try:
abis.remove(explicit_abi)
except ValueError:
pass
platforms = list(platforms or _platform_tags())
for abi in abis:
for platform_ in platforms:
yield Tag(interpreter, abi, platform_)
if _abi3_applies(python_version):
yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms)
yield from (Tag(interpreter, "none", platform_) for platform_ in platforms)
if _abi3_applies(python_version):
for minor_version in range(python_version[1] - 1, 1, -1):
for platform_ in platforms:
interpreter = "cp{version}".format(
version=_version_nodot((python_version[0], minor_version))
)
yield Tag(interpreter, "abi3", platform_)
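# Editor's illustration (not part of the vendored file): with every argument
# pinned, cpython_tags yields the version-specific ABI first, then abi3, then
# none, and finally abi3 tags for older minor versions down to 3.2. Assumes
# this module is importable as `packaging.tags`.
from packaging import tags

ordered = [
    str(t)
    for t in tags.cpython_tags((3, 9), abis=["cp39"], platforms=["manylinux2014_x86_64"])
]
assert ordered[:3] == [
    "cp39-cp39-manylinux2014_x86_64",
    "cp39-abi3-manylinux2014_x86_64",
    "cp39-none-manylinux2014_x86_64",
]
assert ordered[3] == "cp38-abi3-manylinux2014_x86_64"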
def _generic_abi() -> Iterator[str]:
abi = sysconfig.get_config_var("SOABI")
if abi:
yield _normalize_string(abi)
def generic_tags(
interpreter: Optional[str] = None,
abis: Optional[Iterable[str]] = None,
platforms: Optional[Iterable[str]] = None,
*,
warn: bool = False,
) -> Iterator[Tag]:
"""
Yields the tags for a generic interpreter.
The tags consist of:
- <interpreter>-<abi>-<platform>
The "none" ABI will be added if it was not explicitly provided.
"""
if not interpreter:
interp_name = interpreter_name()
interp_version = interpreter_version(warn=warn)
interpreter = "".join([interp_name, interp_version])
if abis is None:
abis = _generic_abi()
platforms = list(platforms or _platform_tags())
abis = list(abis)
if "none" not in abis:
abis.append("none")
for abi in abis:
for platform_ in platforms:
yield Tag(interpreter, abi, platform_)
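# Editor's illustration (not part of the vendored file): generic_tags with an
# explicit interpreter and ABI; the "none" ABI is appended automatically when
# it is not supplied. Assumes this module is importable as `packaging.tags`.
from packaging import tags

pypy = [
    str(t)
    for t in tags.generic_tags("pp37", abis=["pypy37_pp73"], platforms=["linux_x86_64"])
]
assert pypy == ["pp37-pypy37_pp73-linux_x86_64", "pp37-none-linux_x86_64"]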
def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]:
"""
Yields Python versions in descending order.
After the latest version, the major-only version will be yielded, and then
all previous versions of that major version.
"""
if len(py_version) > 1:
yield "py{version}".format(version=_version_nodot(py_version[:2]))
yield "py{major}".format(major=py_version[0])
if len(py_version) > 1:
for minor in range(py_version[1] - 1, -1, -1):
yield "py{version}".format(version=_version_nodot((py_version[0], minor)))
def compatible_tags(
python_version: Optional[PythonVersion] = None,
interpreter: Optional[str] = None,
platforms: Optional[Iterable[str]] = None,
) -> Iterator[Tag]:
"""
Yields the sequence of tags that are compatible with a specific version of Python.
The tags consist of:
- py*-none-<platform>
- <interpreter>-none-any # ... if `interpreter` is provided.
- py*-none-any
"""
if not python_version:
python_version = sys.version_info[:2]
platforms = list(platforms or _platform_tags())
for version in _py_interpreter_range(python_version):
for platform_ in platforms:
yield Tag(version, "none", platform_)
if interpreter:
yield Tag(interpreter, "none", "any")
for version in _py_interpreter_range(python_version):
yield Tag(version, "none", "any")
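# Editor's illustration (not part of the vendored file): compatible_tags
# yields the platform-specific py* tags first, then <interpreter>-none-any,
# then the pure py*-none-any tags. Assumes this module is importable as
# `packaging.tags`.
from packaging import tags

compat = [
    str(t)
    for t in tags.compatible_tags((3, 9), interpreter="cp39", platforms=["manylinux2014_x86_64"])
]
assert compat[0] == "py39-none-manylinux2014_x86_64"
assert compat[1] == "py3-none-manylinux2014_x86_64"
assert "cp39-none-any" in compat and compat[-1] == "py30-none-any"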
def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str:
if not is_32bit:
return arch
if arch.startswith("ppc"):
return "ppc"
return "i386"
def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]:
formats = [cpu_arch]
if cpu_arch == "x86_64":
if version < (10, 4):
return []
formats.extend(["intel", "fat64", "fat32"])
elif cpu_arch == "i386":
if version < (10, 4):
return []
formats.extend(["intel", "fat32", "fat"])
elif cpu_arch == "ppc64":
# TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
if version > (10, 5) or version < (10, 4):
return []
formats.append("fat64")
elif cpu_arch == "ppc":
if version > (10, 6):
return []
formats.extend(["fat32", "fat"])
if cpu_arch in {"arm64", "x86_64"}:
formats.append("universal2")
if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}:
formats.append("universal")
return formats
def mac_platforms(
version: Optional[MacVersion] = None, arch: Optional[str] = None
) -> Iterator[str]:
"""
Yields the platform tags for a macOS system.
The `version` parameter is a two-item tuple specifying the macOS version to
generate platform tags for. The `arch` parameter is the CPU architecture to
generate platform tags for. Both parameters default to the appropriate value
for the current system.
"""
version_str, _, cpu_arch = platform.mac_ver()
if version is None:
version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2])))
else:
version = version
if arch is None:
arch = _mac_arch(cpu_arch)
else:
arch = arch
if (10, 0) <= version and version < (11, 0):
# Prior to Mac OS 11, each yearly release of Mac OS bumped the
# "minor" version number. The major version was always 10.
for minor_version in range(version[1], -1, -1):
compat_version = 10, minor_version
binary_formats = _mac_binary_formats(compat_version, arch)
for binary_format in binary_formats:
yield "macosx_{major}_{minor}_{binary_format}".format(
major=10, minor=minor_version, binary_format=binary_format
)
if version >= (11, 0):
# Starting with Mac OS 11, each yearly release bumps the major version
# number. The minor versions are now the midyear updates.
for major_version in range(version[0], 10, -1):
compat_version = major_version, 0
binary_formats = _mac_binary_formats(compat_version, arch)
for binary_format in binary_formats:
yield "macosx_{major}_{minor}_{binary_format}".format(
major=major_version, minor=0, binary_format=binary_format
)
if version >= (11, 0):
# Mac OS 11 on x86_64 is compatible with binaries from previous releases.
# Arm64 support was introduced in 11.0, so no Arm binaries from previous
# releases exist.
#
# However, the "universal2" binary format can have a
# macOS version earlier than 11.0 when the x86_64 part of the binary supports
# that version of macOS.
if arch == "x86_64":
for minor_version in range(16, 3, -1):
compat_version = 10, minor_version
binary_formats = _mac_binary_formats(compat_version, arch)
for binary_format in binary_formats:
yield "macosx_{major}_{minor}_{binary_format}".format(
major=compat_version[0],
minor=compat_version[1],
binary_format=binary_format,
)
else:
for minor_version in range(16, 3, -1):
compat_version = 10, minor_version
binary_format = "universal2"
yield "macosx_{major}_{minor}_{binary_format}".format(
major=compat_version[0],
minor=compat_version[1],
binary_format=binary_format,
)
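# Editor's illustration (not part of the vendored file): with an explicit
# version and architecture the output is machine-independent; on a pre-11.0
# macOS it walks the minor versions down to 10.4, the oldest release for
# which x86_64 binary formats exist. Assumes this module is importable as
# `packaging.tags`.
from packaging import tags

mac = list(tags.mac_platforms(version=(10, 15), arch="x86_64"))
assert mac[:3] == ["macosx_10_15_x86_64", "macosx_10_15_intel", "macosx_10_15_fat64"]
assert mac[-1] == "macosx_10_4_universal"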
def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]:
linux = _normalize_string(sysconfig.get_platform())
if is_32bit:
if linux == "linux_x86_64":
linux = "linux_i686"
elif linux == "linux_aarch64":
linux = "linux_armv7l"
_, arch = linux.split("_", 1)
yield from _manylinux.platform_tags(linux, arch)
yield from _musllinux.platform_tags(arch)
yield linux
def _generic_platforms() -> Iterator[str]:
yield _normalize_string(sysconfig.get_platform())
def _platform_tags() -> Iterator[str]:
"""
Provides the platform tags for this installation.
"""
if platform.system() == "Darwin":
return mac_platforms()
elif platform.system() == "Linux":
return _linux_platforms()
else:
return _generic_platforms()
def interpreter_name() -> str:
"""
Returns the name of the running interpreter.
"""
name = sys.implementation.name
return INTERPRETER_SHORT_NAMES.get(name) or name
def interpreter_version(*, warn: bool = False) -> str:
"""
Returns the version of the running interpreter.
"""
version = _get_config_var("py_version_nodot", warn=warn)
if version:
version = str(version)
else:
version = _version_nodot(sys.version_info[:2])
return version
def _version_nodot(version: PythonVersion) -> str:
return "".join(map(str, version))
def sys_tags(*, warn: bool = False) -> Iterator[Tag]:
"""
Returns the sequence of tag triples for the running interpreter.
The order of the sequence corresponds to priority order for the
interpreter, from most to least important.
"""
interp_name = interpreter_name()
if interp_name == "cp":
yield from cpython_tags(warn=warn)
else:
yield from generic_tags()
yield from compatible_tags()
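# Editor's illustration (not part of the vendored file): the first tag yielded
# by sys_tags is the most specific tag the running interpreter supports, e.g.
# something like cp39-cp39-manylinux_2_31_x86_64 for CPython 3.9 on a recent
# Linux. Assumes this module is importable as `packaging.tags`.
from packaging import tags

best = next(tags.sys_tags())
print(best.interpreter, best.abi, best.platform)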


@@ -1,136 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
import re
from typing import FrozenSet, NewType, Tuple, Union, cast
from .tags import Tag, parse_tag
from .version import InvalidVersion, Version
BuildTag = Union[Tuple[()], Tuple[int, str]]
NormalizedName = NewType("NormalizedName", str)
class InvalidWheelFilename(ValueError):
"""
An invalid wheel filename was found; users should refer to PEP 427.
"""
class InvalidSdistFilename(ValueError):
"""
An invalid sdist filename was found; users should refer to the packaging user guide.
"""
_canonicalize_regex = re.compile(r"[-_.]+")
# PEP 427: The build number must start with a digit.
_build_tag_regex = re.compile(r"(\d+)(.*)")
def canonicalize_name(name: str) -> NormalizedName:
# This is taken from PEP 503.
value = _canonicalize_regex.sub("-", name).lower()
return cast(NormalizedName, value)
def canonicalize_version(version: Union[Version, str]) -> str:
"""
This is very similar to Version.__str__, but has one subtle difference
with the way it handles the release segment.
"""
if isinstance(version, str):
try:
parsed = Version(version)
except InvalidVersion:
# Legacy versions cannot be normalized
return version
else:
parsed = version
parts = []
# Epoch
if parsed.epoch != 0:
parts.append(f"{parsed.epoch}!")
# Release segment
# NB: This strips trailing '.0's to normalize
parts.append(re.sub(r"(\.0)+$", "", ".".join(str(x) for x in parsed.release)))
# Pre-release
if parsed.pre is not None:
parts.append("".join(str(x) for x in parsed.pre))
# Post-release
if parsed.post is not None:
parts.append(f".post{parsed.post}")
# Development release
if parsed.dev is not None:
parts.append(f".dev{parsed.dev}")
# Local version segment
if parsed.local is not None:
parts.append(f"+{parsed.local}")
return "".join(parts)
def parse_wheel_filename(
filename: str,
) -> Tuple[NormalizedName, Version, BuildTag, FrozenSet[Tag]]:
if not filename.endswith(".whl"):
raise InvalidWheelFilename(
f"Invalid wheel filename (extension must be '.whl'): {filename}"
)
filename = filename[:-4]
dashes = filename.count("-")
if dashes not in (4, 5):
raise InvalidWheelFilename(
f"Invalid wheel filename (wrong number of parts): {filename}"
)
parts = filename.split("-", dashes - 2)
name_part = parts[0]
# See PEP 427 for the rules on escaping the project name
if "__" in name_part or re.match(r"^[\w\d._]*$", name_part, re.UNICODE) is None:
raise InvalidWheelFilename(f"Invalid project name: {filename}")
name = canonicalize_name(name_part)
version = Version(parts[1])
if dashes == 5:
build_part = parts[2]
build_match = _build_tag_regex.match(build_part)
if build_match is None:
raise InvalidWheelFilename(
f"Invalid build number: {build_part} in '{filename}'"
)
build = cast(BuildTag, (int(build_match.group(1)), build_match.group(2)))
else:
build = ()
tags = parse_tag(parts[-1])
return (name, version, build, tags)
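# Editor's illustration (not part of the vendored file): parse_wheel_filename
# splits a wheel name into its canonical project name, version, optional build
# tag, and the (possibly compressed) tag set. Assumes this module is
# importable as `packaging.utils`.
from packaging.utils import parse_wheel_filename

name, version, build, wheel_tags = parse_wheel_filename("pip-21.2.4-py3-none-any.whl")
assert name == "pip" and str(version) == "21.2.4" and build == ()
assert {str(t) for t in wheel_tags} == {"py3-none-any"}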
def parse_sdist_filename(filename: str) -> Tuple[NormalizedName, Version]:
if filename.endswith(".tar.gz"):
file_stem = filename[: -len(".tar.gz")]
elif filename.endswith(".zip"):
file_stem = filename[: -len(".zip")]
else:
raise InvalidSdistFilename(
f"Invalid sdist filename (extension must be '.tar.gz' or '.zip'):"
f" {filename}"
)
# We are requiring a PEP 440 version, which cannot contain dashes,
# so we split on the last dash.
name_part, sep, version_part = file_stem.rpartition("-")
if not sep:
raise InvalidSdistFilename(f"Invalid sdist filename: {filename}")
name = canonicalize_name(name_part)
version = Version(version_part)
return (name, version)
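# Editor's illustration (not part of the vendored file): sdist filenames are
# split on the last dash, since a PEP 440 version cannot contain one. Assumes
# this module is importable as `packaging.utils`.
from packaging.utils import parse_sdist_filename

name, version = parse_sdist_filename("pyparsing-2.4.7.tar.gz")
assert name == "pyparsing" and str(version) == "2.4.7"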


@@ -1,504 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
import collections
import itertools
import re
import warnings
from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union
from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType
__all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"]
InfiniteTypes = Union[InfinityType, NegativeInfinityType]
PrePostDevType = Union[InfiniteTypes, Tuple[str, int]]
SubLocalType = Union[InfiniteTypes, int, str]
LocalType = Union[
NegativeInfinityType,
Tuple[
Union[
SubLocalType,
Tuple[SubLocalType, str],
Tuple[NegativeInfinityType, SubLocalType],
],
...,
],
]
CmpKey = Tuple[
int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType
]
LegacyCmpKey = Tuple[int, Tuple[str, ...]]
VersionComparisonMethod = Callable[
[Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool
]
_Version = collections.namedtuple(
"_Version", ["epoch", "release", "dev", "pre", "post", "local"]
)
def parse(version: str) -> Union["LegacyVersion", "Version"]:
"""
Parse the given version string and return either a :class:`Version` object
or a :class:`LegacyVersion` object depending on if the given version is
a valid PEP 440 version or a legacy version.
"""
try:
return Version(version)
except InvalidVersion:
return LegacyVersion(version)
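# Editor's illustration (not part of the vendored file): parse returns a
# Version for valid PEP 440 strings and falls back to LegacyVersion (which
# emits a DeprecationWarning) otherwise. Assumes this module is importable as
# `packaging.version`.
from packaging.version import LegacyVersion, Version, parse

assert isinstance(parse("1.0.post1"), Version)
assert isinstance(parse("french toast"), LegacyVersion)
assert parse("1.0.dev0") < parse("1.0a1") < parse("1.0") < parse("1.0.post1")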
class InvalidVersion(ValueError):
"""
An invalid version was found; users should refer to PEP 440.
"""
class _BaseVersion:
_key: Union[CmpKey, LegacyCmpKey]
def __hash__(self) -> int:
return hash(self._key)
# Please keep the duplicated `isinstance` check
# in the six comparisons hereunder
# unless you find a way to avoid adding overhead function calls.
def __lt__(self, other: "_BaseVersion") -> bool:
if not isinstance(other, _BaseVersion):
return NotImplemented
return self._key < other._key
def __le__(self, other: "_BaseVersion") -> bool:
if not isinstance(other, _BaseVersion):
return NotImplemented
return self._key <= other._key
def __eq__(self, other: object) -> bool:
if not isinstance(other, _BaseVersion):
return NotImplemented
return self._key == other._key
def __ge__(self, other: "_BaseVersion") -> bool:
if not isinstance(other, _BaseVersion):
return NotImplemented
return self._key >= other._key
def __gt__(self, other: "_BaseVersion") -> bool:
if not isinstance(other, _BaseVersion):
return NotImplemented
return self._key > other._key
def __ne__(self, other: object) -> bool:
if not isinstance(other, _BaseVersion):
return NotImplemented
return self._key != other._key
class LegacyVersion(_BaseVersion):
def __init__(self, version: str) -> None:
self._version = str(version)
self._key = _legacy_cmpkey(self._version)
warnings.warn(
"Creating a LegacyVersion has been deprecated and will be "
"removed in the next major release",
DeprecationWarning,
)
def __str__(self) -> str:
return self._version
def __repr__(self) -> str:
return f"<LegacyVersion('{self}')>"
@property
def public(self) -> str:
return self._version
@property
def base_version(self) -> str:
return self._version
@property
def epoch(self) -> int:
return -1
@property
def release(self) -> None:
return None
@property
def pre(self) -> None:
return None
@property
def post(self) -> None:
return None
@property
def dev(self) -> None:
return None
@property
def local(self) -> None:
return None
@property
def is_prerelease(self) -> bool:
return False
@property
def is_postrelease(self) -> bool:
return False
@property
def is_devrelease(self) -> bool:
return False
_legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE)
_legacy_version_replacement_map = {
"pre": "c",
"preview": "c",
"-": "final-",
"rc": "c",
"dev": "@",
}
def _parse_version_parts(s: str) -> Iterator[str]:
for part in _legacy_version_component_re.split(s):
part = _legacy_version_replacement_map.get(part, part)
if not part or part == ".":
continue
if part[:1] in "0123456789":
# pad for numeric comparison
yield part.zfill(8)
else:
yield "*" + part
# ensure that alpha/beta/candidate are before final
yield "*final"
def _legacy_cmpkey(version: str) -> LegacyCmpKey:
# We hardcode an epoch of -1 here. A PEP 440 version can only have an epoch
# greater than or equal to 0. This effectively sorts every LegacyVersion,
# which uses the de facto standard originally implemented by setuptools,
# before all PEP 440 versions.
epoch = -1
# This scheme is taken from pkg_resources.parse_version of setuptools, prior
# to its adoption of the packaging library.
parts: List[str] = []
for part in _parse_version_parts(version.lower()):
if part.startswith("*"):
# remove "-" before a prerelease tag
if part < "*final":
while parts and parts[-1] == "*final-":
parts.pop()
# remove trailing zeros from each series of numeric parts
while parts and parts[-1] == "00000000":
parts.pop()
parts.append(part)
return epoch, tuple(parts)
# Deliberately not anchored to the start and end of the string, to make it
# easier for 3rd party code to reuse
VERSION_PATTERN = r"""
v?
(?:
(?:(?P<epoch>[0-9]+)!)? # epoch
(?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
(?P<pre> # pre-release
[-_\.]?
(?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
[-_\.]?
(?P<pre_n>[0-9]+)?
)?
(?P<post> # post release
(?:-(?P<post_n1>[0-9]+))
|
(?:
[-_\.]?
(?P<post_l>post|rev|r)
[-_\.]?
(?P<post_n2>[0-9]+)?
)
)?
(?P<dev> # dev release
[-_\.]?
(?P<dev_l>dev)
[-_\.]?
(?P<dev_n>[0-9]+)?
)?
)
(?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version
"""
class Version(_BaseVersion):
_regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
def __init__(self, version: str) -> None:
# Validate the version and parse it into pieces
match = self._regex.search(version)
if not match:
raise InvalidVersion(f"Invalid version: '{version}'")
# Store the parsed out pieces of the version
self._version = _Version(
epoch=int(match.group("epoch")) if match.group("epoch") else 0,
release=tuple(int(i) for i in match.group("release").split(".")),
pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
post=_parse_letter_version(
match.group("post_l"), match.group("post_n1") or match.group("post_n2")
),
dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
local=_parse_local_version(match.group("local")),
)
# Generate a key which will be used for sorting
self._key = _cmpkey(
self._version.epoch,
self._version.release,
self._version.pre,
self._version.post,
self._version.dev,
self._version.local,
)
def __repr__(self) -> str:
return f"<Version('{self}')>"
def __str__(self) -> str:
parts = []
# Epoch
if self.epoch != 0:
parts.append(f"{self.epoch}!")
# Release segment
parts.append(".".join(str(x) for x in self.release))
# Pre-release
if self.pre is not None:
parts.append("".join(str(x) for x in self.pre))
# Post-release
if self.post is not None:
parts.append(f".post{self.post}")
# Development release
if self.dev is not None:
parts.append(f".dev{self.dev}")
# Local version segment
if self.local is not None:
parts.append(f"+{self.local}")
return "".join(parts)
@property
def epoch(self) -> int:
_epoch: int = self._version.epoch
return _epoch
@property
def release(self) -> Tuple[int, ...]:
_release: Tuple[int, ...] = self._version.release
return _release
@property
def pre(self) -> Optional[Tuple[str, int]]:
_pre: Optional[Tuple[str, int]] = self._version.pre
return _pre
@property
def post(self) -> Optional[int]:
return self._version.post[1] if self._version.post else None
@property
def dev(self) -> Optional[int]:
return self._version.dev[1] if self._version.dev else None
@property
def local(self) -> Optional[str]:
if self._version.local:
return ".".join(str(x) for x in self._version.local)
else:
return None
@property
def public(self) -> str:
return str(self).split("+", 1)[0]
@property
def base_version(self) -> str:
parts = []
# Epoch
if self.epoch != 0:
parts.append(f"{self.epoch}!")
# Release segment
parts.append(".".join(str(x) for x in self.release))
return "".join(parts)
@property
def is_prerelease(self) -> bool:
return self.dev is not None or self.pre is not None
@property
def is_postrelease(self) -> bool:
return self.post is not None
@property
def is_devrelease(self) -> bool:
return self.dev is not None
@property
def major(self) -> int:
return self.release[0] if len(self.release) >= 1 else 0
@property
def minor(self) -> int:
return self.release[1] if len(self.release) >= 2 else 0
@property
def micro(self) -> int:
return self.release[2] if len(self.release) >= 3 else 0
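# Editor's illustration (not part of the vendored file): each PEP 440
# component of a parsed Version is exposed as a property. Assumes this module
# is importable as `packaging.version`.
from packaging.version import Version

v = Version("1!2.3.4rc1.post2.dev3+ubuntu.1")
assert (v.epoch, v.release, v.pre, v.post, v.dev, v.local) == (
    1, (2, 3, 4), ("rc", 1), 2, 3, "ubuntu.1"
)
assert v.public == "1!2.3.4rc1.post2.dev3"
assert v.base_version == "1!2.3.4"
assert v.is_prerelease and v.is_postrelease and v.is_devrelease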
def _parse_letter_version(
letter: str, number: Union[str, bytes, SupportsInt]
) -> Optional[Tuple[str, int]]:
if letter:
# We consider there to be an implicit 0 in a pre-release if there is
# not a numeral associated with it.
if number is None:
number = 0
# We normalize any letters to their lower case form
letter = letter.lower()
# We consider some words to be alternate spellings of other words and
# in those cases we want to normalize the spellings to our preferred
# spelling.
if letter == "alpha":
letter = "a"
elif letter == "beta":
letter = "b"
elif letter in ["c", "pre", "preview"]:
letter = "rc"
elif letter in ["rev", "r"]:
letter = "post"
return letter, int(number)
if not letter and number:
# We assume that if we are given a number but not a letter, then this is
# using the implicit post release syntax (e.g. 1.0-1)
letter = "post"
return letter, int(number)
return None
_local_version_separators = re.compile(r"[\._-]")
def _parse_local_version(local: str) -> Optional[LocalType]:
"""
Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
"""
if local is not None:
return tuple(
part.lower() if not part.isdigit() else int(part)
for part in _local_version_separators.split(local)
)
return None
def _cmpkey(
epoch: int,
release: Tuple[int, ...],
pre: Optional[Tuple[str, int]],
post: Optional[Tuple[str, int]],
dev: Optional[Tuple[str, int]],
local: Optional[Tuple[SubLocalType]],
) -> CmpKey:
# When we compare a release version, we want to compare it with all of the
# trailing zeros removed. So we reverse the list, drop all of the now-leading
# zeros until we come to something non-zero, then reverse the remainder back
# into the correct order, make it a tuple, and use that for our sorting key.
_release = tuple(
reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
)
# We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
# We'll do this by abusing the pre segment, but we _only_ want to do this
# if there is not a pre or a post segment. If we have one of those then
# the normal sorting rules will handle this case correctly.
if pre is None and post is None and dev is not None:
_pre: PrePostDevType = NegativeInfinity
# Versions without a pre-release (except as noted above) should sort after
# those with one.
elif pre is None:
_pre = Infinity
else:
_pre = pre
# Versions without a post segment should sort before those with one.
if post is None:
_post: PrePostDevType = NegativeInfinity
else:
_post = post
# Versions without a development segment should sort after those with one.
if dev is None:
_dev: PrePostDevType = Infinity
else:
_dev = dev
if local is None:
# Versions without a local segment should sort before those with one.
_local: LocalType = NegativeInfinity
else:
# Versions with a local segment need that segment parsed to implement
# the sorting rules in PEP440.
# - Alphanumeric segments sort before numeric segments
# - Alphanumeric segments sort lexicographically
# - Numeric segments sort numerically
# - Shorter versions sort before longer versions when the prefixes
# match exactly
_local = tuple(
(i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
)
return epoch, _release, _pre, _post, _dev, _local


@@ -1,22 +0,0 @@
Metadata-Version: 1.1
Name: ply
Version: 3.10
Summary: Python Lex & Yacc
Home-page: http://www.dabeaz.com/ply/
Author: David Beazley
Author-email: dave@dabeaz.com
License: BSD
Description:
PLY is yet another implementation of lex and yacc for Python. Some notable
features include the fact that it's implemented entirely in Python and that it
uses LALR(1) parsing, which is efficient and well suited for larger grammars.
PLY provides most of the standard lex/yacc features including support for empty
productions, precedence rules, error recovery, and support for ambiguous grammars.
PLY is extremely easy to use and provides very extensive error checking.
It is compatible with both Python 2 and Python 3.
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 2


@@ -1,172 +0,0 @@
ANNOUNCE
CHANGES
MANIFEST.in
README.md
TODO
setup.cfg
setup.py
doc/internal.html
doc/makedoc.py
doc/ply.html
example/README
example/cleanup.sh
example/BASIC/README
example/BASIC/basic.py
example/BASIC/basiclex.py
example/BASIC/basiclog.py
example/BASIC/basinterp.py
example/BASIC/basparse.py
example/BASIC/dim.bas
example/BASIC/func.bas
example/BASIC/gcd.bas
example/BASIC/gosub.bas
example/BASIC/hello.bas
example/BASIC/linear.bas
example/BASIC/maxsin.bas
example/BASIC/powers.bas
example/BASIC/rand.bas
example/BASIC/sales.bas
example/BASIC/sears.bas
example/BASIC/sqrt1.bas
example/BASIC/sqrt2.bas
example/GardenSnake/GardenSnake.py
example/GardenSnake/README
example/ansic/README
example/ansic/clex.py
example/ansic/cparse.py
example/calc/calc.py
example/calcdebug/calc.py
example/calceof/calc.py
example/classcalc/calc.py
example/closurecalc/calc.py
example/hedit/hedit.py
example/newclasscalc/calc.py
example/optcalc/README
example/optcalc/calc.py
example/unicalc/calc.py
example/yply/README
example/yply/ylex.py
example/yply/yparse.py
example/yply/yply.py
ply/__init__.py
ply/cpp.py
ply/ctokens.py
ply/lex.py
ply/yacc.py
ply/ygen.py
ply.egg-info/PKG-INFO
ply.egg-info/SOURCES.txt
ply.egg-info/dependency_links.txt
ply.egg-info/top_level.txt
test/README
test/calclex.py
test/cleanup.sh
test/lex_closure.py
test/lex_doc1.py
test/lex_dup1.py
test/lex_dup2.py
test/lex_dup3.py
test/lex_empty.py
test/lex_error1.py
test/lex_error2.py
test/lex_error3.py
test/lex_error4.py
test/lex_hedit.py
test/lex_ignore.py
test/lex_ignore2.py
test/lex_literal1.py
test/lex_literal2.py
test/lex_literal3.py
test/lex_many_tokens.py
test/lex_module.py
test/lex_module_import.py
test/lex_object.py
test/lex_opt_alias.py
test/lex_optimize.py
test/lex_optimize2.py
test/lex_optimize3.py
test/lex_re1.py
test/lex_re2.py
test/lex_re3.py
test/lex_rule1.py
test/lex_rule2.py
test/lex_rule3.py
test/lex_state1.py
test/lex_state2.py
test/lex_state3.py
test/lex_state4.py
test/lex_state5.py
test/lex_state_noerror.py
test/lex_state_norule.py
test/lex_state_try.py
test/lex_token1.py
test/lex_token2.py
test/lex_token3.py
test/lex_token4.py
test/lex_token5.py
test/lex_token_dup.py
test/testlex.py
test/testyacc.py
test/yacc_badargs.py
test/yacc_badid.py
test/yacc_badprec.py
test/yacc_badprec2.py
test/yacc_badprec3.py
test/yacc_badrule.py
test/yacc_badtok.py
test/yacc_dup.py
test/yacc_error1.py
test/yacc_error2.py
test/yacc_error3.py
test/yacc_error4.py
test/yacc_error5.py
test/yacc_error6.py
test/yacc_error7.py
test/yacc_inf.py
test/yacc_literal.py
test/yacc_misplaced.py
test/yacc_missing1.py
test/yacc_nested.py
test/yacc_nodoc.py
test/yacc_noerror.py
test/yacc_nop.py
test/yacc_notfunc.py
test/yacc_notok.py
test/yacc_prec1.py
test/yacc_rr.py
test/yacc_rr_unused.py
test/yacc_simple.py
test/yacc_sr.py
test/yacc_term1.py
test/yacc_unicode_literals.py
test/yacc_unused.py
test/yacc_unused_rule.py
test/yacc_uprec.py
test/yacc_uprec2.py
test/pkg_test1/__init__.py
test/pkg_test1/parsing/__init__.py
test/pkg_test1/parsing/calclex.py
test/pkg_test1/parsing/calcparse.py
test/pkg_test2/__init__.py
test/pkg_test2/parsing/__init__.py
test/pkg_test2/parsing/calclex.py
test/pkg_test2/parsing/calcparse.py
test/pkg_test3/__init__.py
test/pkg_test3/generated/__init__.py
test/pkg_test3/parsing/__init__.py
test/pkg_test3/parsing/calclex.py
test/pkg_test3/parsing/calcparse.py
test/pkg_test4/__init__.py
test/pkg_test4/parsing/__init__.py
test/pkg_test4/parsing/calclex.py
test/pkg_test4/parsing/calcparse.py
test/pkg_test5/__init__.py
test/pkg_test5/parsing/__init__.py
test/pkg_test5/parsing/calclex.py
test/pkg_test5/parsing/calcparse.py
test/pkg_test6/__init__.py
test/pkg_test6/parsing/__init__.py
test/pkg_test6/parsing/calclex.py
test/pkg_test6/parsing/calcparse.py
test/pkg_test6/parsing/expression.py
test/pkg_test6/parsing/statement.py


@@ -1 +0,0 @@
ply


@@ -1,18 +0,0 @@
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,104 +0,0 @@
Metadata-Version: 2.1
Name: pyparsing
Version: 2.4.7
Summary: Python parsing module
Home-page: https://github.com/pyparsing/pyparsing/
Author: Paul McGuire
Author-email: ptmcg@users.sourceforge.net
License: MIT License
Download-URL: https://pypi.org/project/pyparsing/
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Requires-Python: >=2.6, !=3.0.*, !=3.1.*, !=3.2.*
PyParsing -- A Python Parsing Module
====================================
|Build Status|
Introduction
============
The pyparsing module is an alternative approach to creating and
executing simple grammars, vs. the traditional lex/yacc approach, or the
use of regular expressions. The pyparsing module provides a library of
classes that client code uses to construct the grammar directly in
Python code.
*[Since first writing this description of pyparsing in late 2003, this
technique for developing parsers has become more widespread, under the
name Parsing Expression Grammars - PEGs. See more information on PEGs at*
https://en.wikipedia.org/wiki/Parsing_expression_grammar *.]*
Here is a program to parse ``"Hello, World!"`` (or any greeting of the form
``"salutation, addressee!"``):
.. code:: python

    from pyparsing import Word, alphas

    greet = Word(alphas) + "," + Word(alphas) + "!"
    hello = "Hello, World!"
    print(hello, "->", greet.parseString(hello))

The program outputs the following::

    Hello, World! -> ['Hello', ',', 'World', '!']
The Python representation of the grammar is quite readable, owing to the
self-explanatory class names, and the use of '+', '|' and '^' operator
definitions.
The parsed results returned from ``parseString()`` can be accessed as a
nested list, a dictionary, or an object with named attributes.
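As an editor's sketch (not from the original README), a small variant of the
grammar above with named results shows all three access styles; the result
names used here are purely illustrative:

.. code:: python

    from pyparsing import Word, alphas

    greet = Word(alphas)("salutation") + "," + Word(alphas)("addressee") + "!"
    result = greet.parseString("Hello, World!")

    print(list(result))       # ['Hello', ',', 'World', '!']
    print(result.salutation)  # 'Hello'
    print(result.asDict())    # {'salutation': 'Hello', 'addressee': 'World'}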
The pyparsing module handles some of the problems that are typically
vexing when writing text parsers:
- extra or missing whitespace (the above program will also handle ``"Hello,World!"``, ``"Hello , World !"``, etc.)
- quoted strings
- embedded comments
The examples directory includes a simple SQL parser, simple CORBA IDL
parser, a config file parser, a chemical formula parser, and a four-
function algebraic notation parser, among many others.
Documentation
=============
There are many examples in the online docstrings of the classes
and methods in pyparsing. You can find them compiled into online docs
at https://pyparsing-docs.readthedocs.io/en/latest/. Additional
documentation resources and project info are listed in the online
GitHub wiki, at https://github.com/pyparsing/pyparsing/wiki. An
entire directory of examples is at
https://github.com/pyparsing/pyparsing/tree/master/examples.
License
=======
MIT License. See header of pyparsing.py
History
=======
See CHANGES file.
.. |Build Status| image:: https://travis-ci.org/pyparsing/pyparsing.svg?branch=master
:target: https://travis-ci.org/pyparsing/pyparsing


@@ -1,6 +0,0 @@
pyparsing.py,sha256=oxX_ZOz8t-eros-UWY7nJgcdUgD-rQ53Ck0qp7_v3Ig,273365
pyparsing-2.4.7.dist-info/LICENSE,sha256=ENUSChaAWAT_2otojCIL-06POXQbVzIGBNRVowngGXI,1023
pyparsing-2.4.7.dist-info/METADATA,sha256=Ry40soZZiZrAkSMQT_KU1_1REe6FKa5UWzbT6YA8Mxs,3636
pyparsing-2.4.7.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110
pyparsing-2.4.7.dist-info/top_level.txt,sha256=eUOjGzJVhlQ3WS2rFAy2mN3LX_7FKTM5GSJ04jfnLmU,10
pyparsing-2.4.7.dist-info/RECORD,,


@@ -1,6 +0,0 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.34.2)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any


@@ -1 +0,0 @@
pyparsing

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.