diff --git a/.flake8 b/.flake8
deleted file mode 100644
index fba48b089d..0000000000
--- a/.flake8
+++ /dev/null
@@ -1,8 +0,0 @@
-# Black-compatible flake8 config
-
-[flake8]
-ignore = E203, E266, E402, E501, W503, C901
-max-line-length = 80
-max-complexity = 18
-select = B,C,E,F,W,T4,B9
-exclude = docs
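Editor's note: the flake8 settings deleted above have no automatic migration path. A rough Ruff equivalent in `pyproject.toml` might look like the sketch below; the rule selection and the 88-column limit are assumptions mapped from the old config, not part of this diff.

```toml
[tool.ruff]
# ruff-format's default line length; the old flake8 cap was 80.
line-length = 88
exclude = ["docs"]

[tool.ruff.lint]
# B = bugbear, C90 = mccabe complexity, E/W = pycodestyle, F = pyflakes.
select = ["B", "C90", "E", "F", "W"]
# E402 (module-level import position) and E501 (line length) mirror the
# old ignore list; W503 has no Ruff counterpart and needs no entry.
ignore = ["E402", "E501"]

[tool.ruff.lint.mccabe]
max-complexity = 18
```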
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 71833dd993..79bdbf8761 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,15 +1,16 @@
default_language_version:
python: python3
repos:
- - repo: https://github.com/ambv/black
- rev: 22.3.0
+ - repo: https://github.com/astral-sh/ruff-pre-commit
+ rev: v0.15.8
hooks:
- - id: black
+ - id: ruff
+ args: [--fix]
+ - id: ruff-format
- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v2.0.0
+ rev: v5.0.0
hooks:
- id: check-merge-conflict
- - id: flake8
- id: debug-statements
exclude: "cumulusci/(utils/logging|cli/cci|tasks/robotframework/debugger/ui|cli/task|robotframework/utils).py"
- repo: https://github.com/Lucas-C/pre-commit-hooks-markup
@@ -17,11 +18,6 @@ repos:
hooks:
- id: rst-linter
exclude: "docs"
- - repo: https://github.com/pycqa/isort
- rev: 5.12.0
- hooks:
- - id: isort
- args: ["--profile", "black", "--filter-files"]
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v3.1.0
hooks:
diff --git a/.prettierignore b/.prettierignore
index 06c37e2dbb..0c3bdba138 100644
--- a/.prettierignore
+++ b/.prettierignore
@@ -1,5 +1,13 @@
-Test*.yaml
+# Jinja-templated HTML: Prettier lowercases <!DOCTYPE html>, which breaks
+# xml.etree.ElementTree.parse() in RobotLibDoc tests.
+cumulusci/tasks/robotframework/template.html
+
+# VCR cassettes are YAML-shaped but not reliably parseable by Prettier's YAML
+# formatter (multi-line quoted bodies, anchors, etc.).
+**/cassettes/**
+
+# Bundled third-party diagram assets (minified JS/CSS).
docs/diagram/
-**/*.min.js
-**/*.min.css
-cumulusci/files/templates/
\ No newline at end of file
+
+# Jinja-templated JSON/YAML shipped as project templates.
+cumulusci/files/templates/
diff --git a/.readthedocs.yml b/.readthedocs.yml
index 5be9040284..5091f06ea8 100644
--- a/.readthedocs.yml
+++ b/.readthedocs.yml
@@ -27,7 +27,7 @@ sphinx:
formats:
- pdf
- epub
- # Optionally declare the Python requirements required to build your docs
+ # Optionally declare the Python requirements required to build your docs
# python:
# install:
# - requirements: requirements_dev.txt
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index c15c10694e..2cc9a8d22e 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -35,23 +35,23 @@ socioeconomic status, or other similar personal characteristics.
Examples of behavior that contributes to creating a positive environment
include:
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy toward other community members
+- Using welcoming and inclusive language
+- Being respectful of differing viewpoints and experiences
+- Gracefully accepting constructive criticism
+- Focusing on what is best for the community
+- Showing empathy toward other community members
Examples of unacceptable behavior by participants include:
-* The use of sexualized language or imagery and unwelcome sexual attention or
-advances
-* Personal attacks, insulting/derogatory comments, or trolling
-* Public or private harassment
-* Publishing, or threatening to publish, others' private information—such as
-a physical or electronic address—without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
-professional setting
-* Advocating for or encouraging any of the above behaviors
+- The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+- Personal attacks, insulting/derogatory comments, or trolling
+- Public or private harassment
+- Publishing, or threatening to publish, others' private information—such as
+ a physical or electronic address—without explicit permission
+- Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+- Advocating for or encouraging any of the above behaviors
## Our Responsibilities
@@ -98,7 +98,7 @@ It includes adaptions and additions from [Go Community Code of Conduct][golang-c
This Code of Conduct is licensed under the [Creative Commons Attribution 3.0 License][cc-by-3-us].
-[contributor-covenant-home]: https://www.contributor-covenant.org (https://www.contributor-covenant.org/)
+[contributor-covenant-home]: https://www.contributor-covenant.org "https://www.contributor-covenant.org/"
[golang-coc]: https://golang.org/conduct
[cncf-coc]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
[microsoft-coc]: https://opensource.microsoft.com/codeofconduct/
diff --git a/Makefile b/Makefile
index 19b7ada463..b54cec51fc 100644
--- a/Makefile
+++ b/Makefile
@@ -40,20 +40,20 @@ clean-pyc: ## remove Python file artifacts
find . -name '__pycache__' -exec rm -fr {} +
clean-test: ## remove test and coverage artifacts
- rm -fr .tox/
rm -f .coverage
rm -fr htmlcov/
rm -f output.xml
rm -f report.html
-lint: ## check style with flake8
- flake8 cumulusci tests
+lint: ## check style with ruff and pyright
+ ruff check cumulusci tests
+ ruff format --check cumulusci tests
test: ## run tests quickly with the default Python
pytest
-test-all: ## run tests on every Python version with tox
- tox
+test-all: ## run tests on every Python version via CI matrix
+ @echo "Multi-version testing runs in CI. Use 'pytest' for local testing."
# Use CLASS_PATH to run coverage for a subset of tests.
# $ make coverage CLASS_PATH="cumulusci/core/tests"
@@ -82,7 +82,6 @@ servedocs: docs ## compile the docs watching for changes
watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .
release: clean ## package and upload a release
- python utility/pin_dependencies.py
hatch build
hatch publish
@@ -97,15 +96,11 @@ tag: clean
git tag -a -m 'version $$(hatch version)' v$$(hatch version)
git push --follow-tags
-update-deps:
- echo Use the _Update Python Dependencies_ Github action for real releases
- pip-compile --upgrade --resolver=backtracking --output-file=requirements/prod.txt pyproject.toml
- pip-compile --upgrade --resolver=backtracking --output-file=requirements/dev.txt --all-extras pyproject.toml
+update-deps: ## update all dependencies via uv
+ uv lock --upgrade
-dev-install:
- python -m pip install --upgrade pip pip-tools setuptools
- pip-sync requirements/*.txt
- python -m pip install -e .
+dev-install: ## install development dependencies via uv
+ uv sync --group dev
schema:
python -c 'from cumulusci.utils.yaml import cumulusci_yml; open("cumulusci/schema/cumulusci.jsonschema.json", "w").write(cumulusci_yml.CumulusCIRoot.schema_json(indent=4))'
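Editor's note: `uv sync --group dev` and `uv lock --upgrade` assume the project declares its dev tooling as a PEP 735 dependency group and ships a `uv.lock`. A minimal sketch of the `pyproject.toml` shape this relies on (the group members listed are illustrative guesses, not taken from this diff):

```toml
# PEP 735 dependency groups live at the top level of pyproject.toml,
# alongside [project]; `uv sync --group dev` installs the "dev" group.
[dependency-groups]
dev = [
    "pytest",
    "ruff",
    "pre-commit",
]
```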
diff --git a/SECURITY.md b/SECURITY.md
index e31774df28..8249025739 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -4,4 +4,4 @@ Please report any security issue to [security@salesforce.com](mailto:security@sa
as soon as it is discovered. This library limits its runtime dependencies in
order to reduce the total cost of ownership as much as can be, but all consumers
should remain vigilant and have their security stakeholders review all third-party
-products (3PP) like this one and their dependencies.
\ No newline at end of file
+products (3PP) like this one and their dependencies.
diff --git a/cumulusci/__init__.py b/cumulusci/__init__.py
index 2db01717a4..01c39a428e 100644
--- a/cumulusci/__init__.py
+++ b/cumulusci/__init__.py
@@ -14,5 +14,5 @@
if sys.version_info < (3, 8): # pragma: no cover
raise Exception("CumulusCI requires Python 3.8+.")
-api.OrderedDict = dict
-bulk.OrderedDict = dict
+api.OrderedDict = dict # pyright: ignore[reportPrivateImportUsage]
+bulk.OrderedDict = dict # pyright: ignore[reportPrivateImportUsage]
diff --git a/cumulusci/cli/flow.py b/cumulusci/cli/flow.py
index 96bd8db9cf..3e3c2768e5 100644
--- a/cumulusci/cli/flow.py
+++ b/cumulusci/cli/flow.py
@@ -44,9 +44,9 @@ def flow_doc(runtime, project=False):
flows_by_group = group_items(flows)
flow_groups = sorted(
flows_by_group.keys(),
- key=lambda group: flow_info_groups.index(group)
- if group in flow_info_groups
- else 100,
+ key=lambda group: (
+ flow_info_groups.index(group) if group in flow_info_groups else 100
+ ),
)
for group in flow_groups:
diff --git a/cumulusci/cli/logger.py b/cumulusci/cli/logger.py
index 0e5461f829..e045ab43d6 100644
--- a/cumulusci/cli/logger.py
+++ b/cumulusci/cli/logger.py
@@ -1,4 +1,5 @@
-""" CLI logger """
+"""CLI logger"""
+
import logging
import os
import sys
diff --git a/cumulusci/cli/org.py b/cumulusci/cli/org.py
index 3d2d08fc7a..6951206429 100644
--- a/cumulusci/cli/org.py
+++ b/cumulusci/cli/org.py
@@ -34,6 +34,7 @@ def set_org_name(required):
`required` is a boolean for whether org_name is required
"""
+
# could be generalized to work for any mutex pair (or list) but no obvious need
def callback(ctx, param, value):
"""Callback which enforces mutex and 'required' behaviour (if required)."""
@@ -474,7 +475,6 @@ def org_prune(runtime, include_active=False):
org_shapes_skipped = []
active_orgs_skipped = []
for org_name in runtime.keychain.list_orgs():
-
org_config = runtime.keychain.get_org(org_name)
if org_name in predefined_scratch_configs:
diff --git a/cumulusci/cli/project.py b/cumulusci/cli/project.py
index 9111259b83..fc77fa693a 100644
--- a/cumulusci/cli/project.py
+++ b/cumulusci/cli/project.py
@@ -246,7 +246,6 @@ def init_from_context(context: Dict[str, object], echo: bool = False):
# Create sfdx-project.json
if not os.path.isfile("sfdx-project.json"):
-
sfdx_project = {
"packageDirectories": [{"path": "force-app", "default": True}],
"namespace": context["package_namespace"],
diff --git a/cumulusci/cli/runtime.py b/cumulusci/cli/runtime.py
index 9e596fa7da..8635bc7f36 100644
--- a/cumulusci/cli/runtime.py
+++ b/cumulusci/cli/runtime.py
@@ -23,7 +23,7 @@ def __init__(self, *args, **kwargs):
super(CliRuntime, self).__init__(*args, **kwargs)
except ConfigError as e:
raise click.UsageError(f"Config Error: {str(e)}")
- except (KeychainKeyNotFound) as e:
+ except KeychainKeyNotFound as e:
raise click.UsageError(f"Keychain Error: {str(e)}")
def get_keychain_class(self):
diff --git a/cumulusci/cli/service.py b/cumulusci/cli/service.py
index 6beaaf9fbf..d6590e3158 100644
--- a/cumulusci/cli/service.py
+++ b/cumulusci/cli/service.py
@@ -69,7 +69,7 @@ def service_list(runtime, plain, print_json):
console.print(table)
-class ConnectServiceCommand(click.MultiCommand):
+class ConnectServiceCommand(click.Group):
def _get_services_config(self, runtime):
return (
runtime.project_config.services
diff --git a/cumulusci/cli/task.py b/cumulusci/cli/task.py
index cfbe749b91..344f973a96 100644
--- a/cumulusci/cli/task.py
+++ b/cumulusci/cli/task.py
@@ -107,7 +107,7 @@ def task_info(runtime, task_name):
click.echo(rst2ansi(doc))
-class RunTaskCommand(click.MultiCommand):
+class RunTaskCommand(click.Group):
# options that are not task specific
global_options = {
"no-prompt": {
diff --git a/cumulusci/cli/tests/test_error.py b/cumulusci/cli/tests/test_error.py
index 094a767896..ccfb6c10bb 100644
--- a/cumulusci/cli/tests/test_error.py
+++ b/cumulusci/cli/tests/test_error.py
@@ -98,7 +98,9 @@ def test_error_gist(
)
webbrowser_open.assert_called_once_with(expected_gist_url)
- @pytest.mark.skipif(sys.version_info > (3, 11), reason="requires python3.10 or higher")
+    @pytest.mark.skipif(
+        sys.version_info > (3, 11), reason="skipped on Python 3.12 and later"
+    )
@mock.patch("cumulusci.cli.error.platform")
@mock.patch("cumulusci.cli.error.sys")
@mock.patch("cumulusci.cli.error.datetime")
diff --git a/cumulusci/cli/tests/test_org.py b/cumulusci/cli/tests/test_org.py
index c85f2e0507..494dec3b4b 100644
--- a/cumulusci/cli/tests/test_org.py
+++ b/cumulusci/cli/tests/test_org.py
@@ -887,9 +887,9 @@ def get_org(orgname):
run_click_command(org.org_list, runtime=runtime, json_flag=False, plain=False)
- assert "Cannot load org config for `test1`" in str(
+ assert "Cannot load org config for `test1`" in str(echo.mock_calls), (
echo.mock_calls
- ), echo.mock_calls
+ )
assert "NOPE!" in str(echo.mock_calls), echo.mock_calls
assert "Cannot cleanup org cache dirs" in str(echo.mock_calls), echo.mock_calls
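Editor's note: the reflow above is the formatter moving an assert message into parentheses on its own line, which is harmless; what must be avoided is parenthesizing the whole assert, which silently turns it into a truthy tuple. A standalone illustration (not code from this repository):

```python
# Safe: condition first, then a parenthesized message -- this is the
# shape the formatter produces.
calls = ["Cannot load org config for `test1`"]
assert "test1" in str(calls), (
    "expected the org name to appear in the echoed calls"
)

# Broken: wrapping (condition, message) in one pair of parentheses
# builds a two-element tuple, and any non-empty tuple is truthy, so
# this assert can never fail.
always_true = ("test1" in "no match here", "this message is never shown")
assert always_true  # passes even though the membership check is False
assert always_true[0] is False
```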
diff --git a/cumulusci/cli/tests/test_plan.py b/cumulusci/cli/tests/test_plan.py
index 1dbc55da38..d73ab432ed 100644
--- a/cumulusci/cli/tests/test_plan.py
+++ b/cumulusci/cli/tests/test_plan.py
@@ -148,16 +148,18 @@ def test_plan_info__config(self, cli_table, runtime):
run_click_command(
plan.plan_info, "plan 1", runtime=runtime, messages_only=False
)
- cli_table.assert_any_call(
- title="Config",
- data=[
- ["Key", "Value"],
- ["YAML Key", "plan 1"],
- ["Slug", "plan1_slug"],
- ["Tier", "primary"],
- ["Hidden?", False],
- ],
- ),
+ (
+ cli_table.assert_any_call(
+ title="Config",
+ data=[
+ ["Key", "Value"],
+ ["YAML Key", "plan 1"],
+ ["Slug", "plan1_slug"],
+ ["Tier", "primary"],
+ ["Hidden?", False],
+ ],
+ ),
+ )
@mock.patch("cumulusci.cli.plan.CliTable")
def test_plan_info__messages(self, cli_table, runtime):
@@ -182,17 +184,19 @@ def test_plan_info__preflight_checks(self, cli_table, runtime):
run_click_command(
plan.plan_info, "plan 1", runtime=runtime, messages_only=False
)
- cli_table.assert_any_call(
- title="Plan Preflights",
- data=[
- ["Action", "Message", "When"],
- [
- "error",
- "Test Package must be installed in your org.",
- "'test package' not in tasks.get_installed_packages()",
+ (
+ cli_table.assert_any_call(
+ title="Plan Preflights",
+ data=[
+ ["Action", "Message", "When"],
+ [
+ "error",
+ "Test Package must be installed in your org.",
+ "'test package' not in tasks.get_installed_packages()",
+ ],
],
- ],
- ),
+ ),
+ )
@mock.patch("cumulusci.cli.plan.CliTable")
def test_plan_info__step_preflight_checks(self, cli_table, runtime):
@@ -200,13 +204,15 @@ def test_plan_info__step_preflight_checks(self, cli_table, runtime):
run_click_command(
plan.plan_info, "plan 1", runtime=runtime, messages_only=False
)
- cli_table.assert_any_call(
- title="Step Preflights",
- data=[
- ["Step", "Action", "Message", "When"],
- [1, "error", "Danger Will Robinson!", "soon"],
- ],
- ),
+ (
+ cli_table.assert_any_call(
+ title="Step Preflights",
+ data=[
+ ["Step", "Action", "Message", "When"],
+ [1, "error", "Danger Will Robinson!", "soon"],
+ ],
+ ),
+ )
@mock.patch("cumulusci.cli.plan.CliTable")
def test_plan_info__steps(self, cli_table, runtime):
diff --git a/cumulusci/cli/tests/test_project.py b/cumulusci/cli/tests/test_project.py
index d296cdf838..511a02254f 100644
--- a/cumulusci/cli/tests/test_project.py
+++ b/cumulusci/cli/tests/test_project.py
@@ -230,6 +230,4 @@ def test_render_recursive(self):
- list
\x1b[1mdict:\x1b[0m
\x1b[1mkey:\x1b[0m value
- \x1b[1mstr:\x1b[0m str""" == "\n".join(
- out
- )
+ \x1b[1mstr:\x1b[0m str""" == "\n".join(out)
diff --git a/cumulusci/cli/ui.py b/cumulusci/cli/ui.py
index a809d8de42..651946c2a7 100644
--- a/cumulusci/cli/ui.py
+++ b/cumulusci/cli/ui.py
@@ -4,6 +4,7 @@
Classes:
CliTable: Pretty prints tabular data to stdout, via Rich's Console API
"""
+
import os
from typing import Any, List, Union
diff --git a/cumulusci/core/config/base_config.py b/cumulusci/core/config/base_config.py
index 130ef3ac96..7ebd788dbc 100644
--- a/cumulusci/core/config/base_config.py
+++ b/cumulusci/core/config/base_config.py
@@ -28,9 +28,9 @@ def __init__(self, config: Optional[dict] = None, keychain=None):
if not type_for_value:
warnings.warn(f"{k}: {v} not declared for {type(self)}")
if (v is not None) and (type_for_value is not None):
- assert isinstance(
- v, type_for_value
- ), f"{k}: {v} should be of type {type_for_value}, not {type(v)} for {type(self)}"
+ assert isinstance(v, type_for_value), (
+ f"{k}: {v} should be of type {type_for_value}, not {type(v)} for {type(self)}"
+ )
self.config = config.copy()
self._init_logger()
diff --git a/cumulusci/core/config/marketing_cloud_service_config.py b/cumulusci/core/config/marketing_cloud_service_config.py
index 61fc376ad9..14015575ee 100644
--- a/cumulusci/core/config/marketing_cloud_service_config.py
+++ b/cumulusci/core/config/marketing_cloud_service_config.py
@@ -10,7 +10,6 @@
class MarketingCloudServiceConfig(OAuth2ServiceConfig):
-
refresh_token: str
oauth2_client: str
soap_instance_url: str
diff --git a/cumulusci/core/config/project_config.py b/cumulusci/core/config/project_config.py
index 72c3850e49..780ed05532 100644
--- a/cumulusci/core/config/project_config.py
+++ b/cumulusci/core/config/project_config.py
@@ -141,9 +141,7 @@ def config_project_local_path(self) -> Optional[str]:
def _load_config(self):
"""Loads the configuration from YAML, if no override config was passed in initially."""
- if (
- self.config
- ): # any config being pre-set at init will short circuit out, but not a plain {}
+ if self.config: # any config being pre-set at init will short circuit out, but not a plain {}
return
# Verify that we're in a project
diff --git a/cumulusci/core/config/tests/_test_config_backwards_compatibility.py b/cumulusci/core/config/tests/_test_config_backwards_compatibility.py
index 339fc545a6..36f8c477aa 100644
--- a/cumulusci/core/config/tests/_test_config_backwards_compatibility.py
+++ b/cumulusci/core/config/tests/_test_config_backwards_compatibility.py
@@ -12,7 +12,6 @@ class TestConfigBackwardsCompatibility:
@patch.dict(os.environ)
def test_temporary_backwards_compatibility_hacks(self):
with pytest.warns(ClassMovedWarning):
-
from cumulusci.core.config.OrgConfig import OrgConfig
assert isinstance(OrgConfig, type)
diff --git a/cumulusci/core/config/tests/test_config.py b/cumulusci/core/config/tests/test_config.py
index 77973526dd..11ad6350b2 100644
--- a/cumulusci/core/config/tests/test_config.py
+++ b/cumulusci/core/config/tests/test_config.py
@@ -60,13 +60,16 @@ def test_getattr_toplevel_key(self):
def test_getattr_toplevel_key_missing(self):
config = BaseConfig()
config.config = {}
- with mock.patch(
- "cumulusci.core.config.base_config.STRICT_GETATTR", False
- ), pytest.warns(DeprecationWarning, match="foo"):
+ with (
+ mock.patch("cumulusci.core.config.base_config.STRICT_GETATTR", False),
+ pytest.warns(DeprecationWarning, match="foo"),
+ ):
assert config.foo is None
- with mock.patch(
- "cumulusci.core.config.base_config.STRICT_GETATTR", True
- ), pytest.deprecated_call(), pytest.raises(AssertionError):
+ with (
+ mock.patch("cumulusci.core.config.base_config.STRICT_GETATTR", True),
+ pytest.deprecated_call(),
+ pytest.raises(AssertionError),
+ ):
assert config.foo is None
def test_getattr_child_key(self):
@@ -77,9 +80,11 @@ def test_getattr_child_key(self):
def test_strict_getattr(self):
config = FakeConfig()
config.config = {"foo": {"bar": "baz"}}
- with mock.patch(
- "cumulusci.core.config.base_config.STRICT_GETATTR", "True"
- ), mock.patch("warnings.warn"), pytest.raises(AssertionError):
+ with (
+ mock.patch("cumulusci.core.config.base_config.STRICT_GETATTR", "True"),
+ mock.patch("warnings.warn"),
+ pytest.raises(AssertionError),
+ ):
print(config.jfiesojfieoj)
def test_getattr_child_parent_key_missing(self):
@@ -398,7 +403,7 @@ def test_repo_url_from_git(self, git_path):
git_path.return_value = git_config_file
repo_url = "https://github.com/foo/bar.git"
with open(git_config_file, "w") as f:
- f.writelines(['[remote "origin"]\n' f"\turl = {repo_url}"])
+ f.writelines([f'[remote "origin"]\n\turl = {repo_url}'])
config = BaseProjectConfig(UniversalConfig())
assert repo_url == config.repo_url
@@ -1365,7 +1370,6 @@ def test_orginfo_cache_dir_local(self):
)
with TemporaryDirectory() as t:
with mock.patch("cumulusci.tests.util.DummyKeychain.cache_dir", Path(t)):
-
with config.get_orginfo_cache_dir("bar") as directory:
assert str(t) in directory, (t, directory)
assert (
@@ -1390,9 +1394,9 @@ def test_is_person_accounts_enabled__not_enabled(self):
},
"test",
)
- assert (
- config._is_person_accounts_enabled is None
- ), "_is_person_accounts_enabled should be initialized as None"
+ assert config._is_person_accounts_enabled is None, (
+ "_is_person_accounts_enabled should be initialized as None"
+ )
responses.add(
"GET",
@@ -1427,9 +1431,9 @@ def test_is_person_accounts_enabled__is_enabled(self):
},
"test",
)
- assert (
- config._is_person_accounts_enabled is None
- ), "_is_person_accounts_enabled should be initialized as None"
+ assert config._is_person_accounts_enabled is None, (
+ "_is_person_accounts_enabled should be initialized as None"
+ )
responses.add(
"GET",
@@ -1464,9 +1468,9 @@ def test_is_multi_currency_enabled__not_enabled(self):
},
"test",
)
- assert (
- config._multiple_currencies_is_enabled is False
- ), "_multiple_currencies_is_enabled should be initialized as False"
+ assert config._multiple_currencies_is_enabled is False, (
+ "_multiple_currencies_is_enabled should be initialized as False"
+ )
# Login call.
responses.add(
@@ -1501,21 +1505,21 @@ def test_is_multi_currency_enabled__not_enabled(self):
# Check 1: is_multiple_currencies_enabled should be False since the CurrencyType describe gives a 404.
actual = config.is_multiple_currencies_enabled
- assert (
- actual is False
- ), "config.is_multiple_currencies_enabled should be False since the CurrencyType describe returns a 404."
- assert (
- config._multiple_currencies_is_enabled is False
- ), "config._multiple_currencies_is_enabled should still be False since the CurrencyType describe returns a 404."
+ assert actual is False, (
+ "config.is_multiple_currencies_enabled should be False since the CurrencyType describe returns a 404."
+ )
+ assert config._multiple_currencies_is_enabled is False, (
+ "config._multiple_currencies_is_enabled should still be False since the CurrencyType describe returns a 404."
+ )
# Check 2: We should still get the CurrencyType describe since we never cached that multiple currencies is enabled.
actual = config.is_multiple_currencies_enabled
- assert (
- actual is False
- ), "config.is_multiple_currencies_enabled should be False since the CurrencyType describe returns a 404."
- assert (
- config._multiple_currencies_is_enabled is False
- ), "config._multiple_currencies_is_enabled should still be False since the CurrencyType describe returns a 404."
+ assert actual is False, (
+ "config.is_multiple_currencies_enabled should be False since the CurrencyType describe returns a 404."
+ )
+ assert config._multiple_currencies_is_enabled is False, (
+ "config._multiple_currencies_is_enabled should still be False since the CurrencyType describe returns a 404."
+ )
# We should have made 3 calls: 1 token call + 2 describe calls
assert len(responses.calls) == 1 + 2
@@ -1531,9 +1535,9 @@ def test_is_multi_currency_enabled__is_enabled(self):
"test",
)
- assert (
- config._multiple_currencies_is_enabled is False
- ), "_multiple_currencies_is_enabled should be initialized as False"
+ assert config._multiple_currencies_is_enabled is False, (
+ "_multiple_currencies_is_enabled should be initialized as False"
+ )
# Token call.
responses.add(
@@ -1554,21 +1558,21 @@ def test_is_multi_currency_enabled__is_enabled(self):
# Check 1: is_multiple_currencies_enabled should be True since the CurrencyType describe gives a 200.
actual = config.is_multiple_currencies_enabled
- assert (
- actual is True
- ), "config.is_multiple_currencies_enabled should be True since the CurrencyType describe returns a 200."
- assert (
- config._multiple_currencies_is_enabled is True
- ), "config._multiple_currencies_is_enabled should be True since the CurrencyType describe returns a 200."
+ assert actual is True, (
+ "config.is_multiple_currencies_enabled should be True since the CurrencyType describe returns a 200."
+ )
+ assert config._multiple_currencies_is_enabled is True, (
+ "config._multiple_currencies_is_enabled should be True since the CurrencyType describe returns a 200."
+ )
# Check 2: We should have cached that Multiple Currencies is enabled, so we should not make a 2nd descrobe call. This is ok to cache since Multiple Currencies cannot be disabled.
actual = config.is_multiple_currencies_enabled
- assert (
- actual is True
- ), "config.is_multiple_currencies_enabled should be True since the our cached value in _multiple_currencies_is_enabled is True."
- assert (
- config._multiple_currencies_is_enabled is True
- ), "config._multiple_currencies_is_enabled should still be True."
+ assert actual is True, (
+            "config.is_multiple_currencies_enabled should be True since our cached value in _multiple_currencies_is_enabled is True."
+ )
+ assert config._multiple_currencies_is_enabled is True, (
+ "config._multiple_currencies_is_enabled should still be True."
+ )
# We should have made 2 calls: 1 token call + 1 describe call
assert len(responses.calls) == 1 + 1
@@ -1609,9 +1613,9 @@ def test_is_advanced_currency_management_enabled__multiple_currencies_not_enable
# is_advanced_currency_management_enabled should be False since:
# - DatedConversionRate describe gives a 404 implying the Sobject is not exposed becuase Multiple Currencies is not enabled.
actual = config.is_advanced_currency_management_enabled
- assert (
- actual is False
- ), "config.is_advanced_currency_management_enabled should be False since the describe gives a 404."
+ assert actual is False, (
+ "config.is_advanced_currency_management_enabled should be False since the describe gives a 404."
+ )
# We should have made 2 calls: 1 token call + 1 describe call
assert len(responses.calls) == 1 + 1
@@ -1649,9 +1653,9 @@ def test_is_advanced_currency_management_enabled__multiple_currencies_enabled__a
# - DatedConversionRate describe gives a 200, so the Sobject is exposed (because Multiple Currencies is enabled).
# - But DatedConversionRate is not creatable implying ACM is not enabled.
actual = config.is_advanced_currency_management_enabled
- assert (
- actual is False
- ), 'config.is_advanced_currency_management_enabled should be False since though the describe gives a 200, the describe is not "createable".'
+ assert actual is False, (
+            'config.is_advanced_currency_management_enabled should be False since, although the describe gives a 200, the describe is not "createable".'
+ )
# We should have made 2 calls: 1 token call + 1 describe call
assert len(responses.calls) == 1 + 1
@@ -1689,9 +1693,9 @@ def test_is_advanced_currency_management_enabled__multiple_currencies_enabled__a
# - DatedConversionRate describe gives a 200, so the Sobject is exposed (because Multiple Currencies is enabled).
# - But DatedConversionRate is not creatable implying ACM is not enabled.
actual = config.is_advanced_currency_management_enabled
- assert (
- actual is True
- ), 'config.is_advanced_currency_management_enabled should be False since both the describe gives a 200 and the describe is "createable".'
+ assert actual is True, (
+            'config.is_advanced_currency_management_enabled should be True since both the describe gives a 200 and the describe is "createable".'
+ )
# We should have made 2 calls: 1 token call + 1 describe call
assert len(responses.calls) == 1 + 1
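Editor's note: the `with (...)` rewrites in this file use parenthesized context managers, grammar that CPython only guarantees from 3.10 onward, so this diff quietly assumes a higher floor than the 3.8 check in `cumulusci/__init__.py`. The construct itself is equivalent to nesting; a standalone sketch:

```python
from contextlib import contextmanager

@contextmanager
def tracked(name, log):
    # Record enter/exit order so the nesting behaviour is visible.
    log.append(f"enter {name}")
    try:
        yield name
    finally:
        log.append(f"exit {name}")

log = []
# One with-statement, several managers, parentheses purely for layout;
# managers enter left-to-right and exit in reverse order.
with (
    tracked("outer", log),
    tracked("inner", log),
):
    log.append("body")

print(log)
# -> ['enter outer', 'enter inner', 'body', 'exit inner', 'exit outer']
```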
diff --git a/cumulusci/core/datasets.py b/cumulusci/core/datasets.py
index 7ebca75e13..89514fb825 100644
--- a/cumulusci/core/datasets.py
+++ b/cumulusci/core/datasets.py
@@ -92,9 +92,9 @@ def __exit__(self, *args, **kwargs):
self.schema_context.__exit__(*args, **kwargs) # type: ignore
def create(self):
- assert (
- self.initialized
- ), "You must open this context manager. e.g. `with Dataset() as dataset`"
+ assert self.initialized, (
+ "You must open this context manager. e.g. `with Dataset() as dataset`"
+ )
if not self.path.exists():
self.path.mkdir()
diff --git a/cumulusci/core/dependencies/dependencies.py b/cumulusci/core/dependencies/dependencies.py
index 2c0050dcba..8d17e7f2a5 100644
--- a/cumulusci/core/dependencies/dependencies.py
+++ b/cumulusci/core/dependencies/dependencies.py
@@ -56,9 +56,9 @@ def _validate_github_parameters(values):
# Populate the `github` property if not already populated.
if not values.get("github") and values.get("repo_name"):
- values[
- "github"
- ] = f"https://github.com/{values['repo_owner']}/{values['repo_name']}"
+ values["github"] = (
+ f"https://github.com/{values['repo_owner']}/{values['repo_name']}"
+ )
values.pop("repo_owner")
values.pop("repo_name")
@@ -67,12 +67,10 @@ def _validate_github_parameters(values):
class DependencyPin(HashableBaseModel, abc.ABC):
@abc.abstractmethod
- def can_pin(self, d: "DynamicDependency") -> bool:
- ...
+ def can_pin(self, d: "DynamicDependency") -> bool: ...
@abc.abstractmethod
- def pin(self, d: "DynamicDependency", context: BaseProjectConfig):
- ...
+ def pin(self, d: "DynamicDependency", context: BaseProjectConfig): ...
DependencyPin.update_forward_refs()
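Editor's note: collapsing `...`-only bodies onto the `def` line, as above, is purely a formatting change; `...` is just the `Ellipsis` expression, so the methods remain abstract. A minimal standalone sketch:

```python
import abc

class Pin(abc.ABC):
    # One-line stub body: "..." is an ordinary expression statement,
    # so this is semantically identical to the multi-line form.
    @abc.abstractmethod
    def can_pin(self, name: str) -> bool: ...

class AlwaysPin(Pin):
    def can_pin(self, name: str) -> bool:
        return True

assert AlwaysPin().can_pin("anything") is True

# The class is still abstract: instantiating it raises TypeError.
try:
    Pin()  # type: ignore[abstract]
except TypeError as exc:
    print(f"still abstract: {exc}")
```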
diff --git a/cumulusci/core/dependencies/resolvers.py b/cumulusci/core/dependencies/resolvers.py
index afa0d04e31..8dca8a902b 100644
--- a/cumulusci/core/dependencies/resolvers.py
+++ b/cumulusci/core/dependencies/resolvers.py
@@ -268,8 +268,7 @@ def get_branches(
self,
dep: BaseGitHubDependency,
context: BaseProjectConfig,
- ) -> List[Branch]:
- ...
+ ) -> List[Branch]: ...
def resolve(
self, dep: BaseGitHubDependency, context: BaseProjectConfig
@@ -317,7 +316,8 @@ def is_valid_repo_context(self, context: BaseProjectConfig) -> bool:
return bool(
super().is_valid_repo_context(context)
and is_release_branch_or_child(
- context.repo_branch, context.project__git__prefix_feature # type: ignore
+ context.repo_branch,
+ context.project__git__prefix_feature, # type: ignore
)
)
diff --git a/cumulusci/core/flowrunner.py b/cumulusci/core/flowrunner.py
index 098e84ba4d..1cf3756c7d 100644
--- a/cumulusci/core/flowrunner.py
+++ b/cumulusci/core/flowrunner.py
@@ -1,4 +1,4 @@
-""" FlowRunner contains the logic for actually running a flow.
+"""FlowRunner contains the logic for actually running a flow.
Flows are an integral part of CCI, they actually *do the thing*. We've been getting
along quite nicely with BaseFlow, which turns a flow definition into a callable
diff --git a/cumulusci/core/github.py b/cumulusci/core/github.py
index eb5732a805..40a3222208 100644
--- a/cumulusci/core/github.py
+++ b/cumulusci/core/github.py
@@ -189,7 +189,7 @@ def validate_service(options: dict, keychain) -> dict:
server_domain = options.get("server_domain", None)
gh = _determine_github_client(server_domain, {"token": token})
- if type(gh) == GitHubEnterprise:
+ if isinstance(gh, GitHubEnterprise):
validate_gh_enterprise(server_domain, keychain)
try:
authed_user = gh.me()
@@ -445,7 +445,7 @@ def get_version_id_from_tag(repo: Repository, tag_name: str) -> str:
def format_github3_exception(
- exc: Union[ResponseError, TransportError, ConnectionError]
+ exc: Union[ResponseError, TransportError, ConnectionError],
) -> str:
"""Checks github3 exceptions for the most common GitHub authentication
issues, returning a user-friendly message if found.
@@ -603,7 +603,7 @@ def catch_common_github_auth_errors(func: Callable) -> Callable:
def inner(*args, **kwargs):
try:
return func(*args, **kwargs)
- except (ConnectionError) as exc:
+ except ConnectionError as exc:
if error_msg := format_github3_exception(exc):
raise GithubApiError(error_msg) from exc
else:
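Editor's note: swapping `type(gh) == GitHubEnterprise` for `isinstance` (Ruff's E721) is not purely cosmetic: `isinstance` also matches subclasses, while exact-type equality does not. A standalone sketch with hypothetical class names (not the real github3.py types):

```python
class GitHubClient:
    pass

class EnterpriseClient(GitHubClient):
    pass

class CustomEnterpriseClient(EnterpriseClient):
    # A hypothetical subclass that exact-type comparison would miss.
    pass

client = CustomEnterpriseClient()

# Exact-type equality: False for subclasses.
assert (type(client) == EnterpriseClient) is False

# isinstance: True anywhere along the inheritance chain.
assert isinstance(client, EnterpriseClient)
assert isinstance(client, GitHubClient)
```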
diff --git a/cumulusci/core/keychain/base_project_keychain.py b/cumulusci/core/keychain/base_project_keychain.py
index 0c1d0a6763..9bd5e647ec 100644
--- a/cumulusci/core/keychain/base_project_keychain.py
+++ b/cumulusci/core/keychain/base_project_keychain.py
@@ -74,9 +74,9 @@ def create_scratch_org(
scratch_config.setdefault("namespaced", False)
scratch_config["config_name"] = config_name
- scratch_config[
- "sfdx_alias"
- ] = f"{self.project_config.project__name}__{org_name}"
+ scratch_config["sfdx_alias"] = (
+ f"{self.project_config.project__name}__{org_name}"
+ )
org_config = ScratchOrgConfig(
scratch_config, org_name, keychain=self, global_org=False
)
@@ -353,9 +353,9 @@ def _load_default_connected_app(self):
"""Load the default connected app as a first class service on the keychain."""
if "connected_app" not in self.config["services"]:
self.config["services"]["connected_app"] = {}
- self.config["services"]["connected_app"][
- DEFAULT_CONNECTED_APP_NAME
- ] = DEFAULT_CONNECTED_APP
+ self.config["services"]["connected_app"][DEFAULT_CONNECTED_APP_NAME] = (
+ DEFAULT_CONNECTED_APP
+ )
def _set_service(
self, service_type, alias, service_config, save=True, config_encrypted=False
diff --git a/cumulusci/core/keychain/serialization.py b/cumulusci/core/keychain/serialization.py
index 538ed8b73d..51ab2ad2c5 100644
--- a/cumulusci/core/keychain/serialization.py
+++ b/cumulusci/core/keychain/serialization.py
@@ -128,9 +128,9 @@ def check_round_trip(data: dict, logger: Logger) -> Optional[bytes]:
return None
try:
test_load = load_config_from_json_or_pickle(as_json_text)
- assert _simplify_config(test_load) == _simplify_config(
- data
- ), f"JSON did not round-trip-cleanly {test_load}, {data}"
+ assert _simplify_config(test_load) == _simplify_config(data), (
+ f"JSON did not round-trip-cleanly {test_load}, {data}"
+ )
except Exception as e: # pragma: no cover
report_error("CumulusCI found a problem saving your config:", e, logger)
return None
diff --git a/cumulusci/core/keychain/tests/test_encrypted_file_project_keychain.py b/cumulusci/core/keychain/tests/test_encrypted_file_project_keychain.py
index c761ed6bef..54b808e4ca 100644
--- a/cumulusci/core/keychain/tests/test_encrypted_file_project_keychain.py
+++ b/cumulusci/core/keychain/tests/test_encrypted_file_project_keychain.py
@@ -67,7 +67,6 @@ def keychain(project_config, key) -> EncryptedFileProjectKeychain:
class TestEncryptedFileProjectKeychain:
-
project_name = "TestProject"
def _write_file(self, filepath, contents):
@@ -138,7 +137,6 @@ def test_set_org__should_not_save_when_environment_project_keychain_set(
self, keychain, org_config, withdifferentformats
):
with temporary_dir() as temp:
- env = EnvironmentVarGuard()
with EnvironmentVarGuard() as env:
env.set("CUMULUSCI_KEYCHAIN_CLASS", "EnvironmentProjectKeychain")
with mock.patch.object(
@@ -281,7 +279,6 @@ def test_get_default_org__outside_project(self, keychain):
def test_load_orgs_from_environment(self, keychain, org_config):
scratch_config = org_config.config.copy()
scratch_config["scratch"] = True
- env = EnvironmentVarGuard()
with EnvironmentVarGuard() as env:
env.set(
f"{keychain.env_org_var_prefix}dev",
@@ -299,7 +296,6 @@ def test_load_orgs_from_environment(self, keychain, org_config):
assert _simplify_config(actual_config.config) == org_config.config
def test_load_orgs_from_environment__empty_throws_error(self, keychain, org_config):
- env = EnvironmentVarGuard()
with EnvironmentVarGuard() as env:
env.set(
f"{keychain.env_org_var_prefix}dev",
@@ -311,7 +307,6 @@ def test_load_orgs_from_environment__empty_throws_error(self, keychain, org_conf
def test_load_orgs_from_environment__invalid_json_throws_error(
self, keychain, org_config
):
- env = EnvironmentVarGuard()
with EnvironmentVarGuard() as env:
env.set(
f"{keychain.env_org_var_prefix}dev",
@@ -384,7 +379,6 @@ def test_load_services_from_env__same_name_throws_error(self, keychain):
def test_load_services_from_env__empty_throws_error(self, keychain):
service_prefix = EncryptedFileProjectKeychain.env_service_var_prefix
- env = EnvironmentVarGuard()
with EnvironmentVarGuard() as env:
env.set(
f"{service_prefix}github",
@@ -395,7 +389,6 @@ def test_load_services_from_env__empty_throws_error(self, keychain):
def test_load_services_from_env__invalid_json_throws_error(self, keychain):
service_prefix = EncryptedFileProjectKeychain.env_service_var_prefix
- env = EnvironmentVarGuard()
with EnvironmentVarGuard() as env:
env.set(
f"{service_prefix}github",
diff --git a/cumulusci/core/source_transforms/tests/test_transforms.py b/cumulusci/core/source_transforms/tests/test_transforms.py
index 619e9a125e..1cd3ddc90f 100644
--- a/cumulusci/core/source_transforms/tests/test_transforms.py
+++ b/cumulusci/core/source_transforms/tests/test_transforms.py
@@ -152,7 +152,7 @@ def test_namespace_injection_ignores_binary(task_context):
ZipFileSpec(
{
Path("ns__Foo.cls"): "System.debug('ns__blah');",
- Path("b.staticResource"): b"ns__\xFF\xFF",
+ Path("b.staticResource"): b"ns__\xff\xff",
}
).as_zipfile(),
options={"namespace_tokenize": "ns", "unmanaged": False},
@@ -162,7 +162,7 @@ def test_namespace_injection_ignores_binary(task_context):
ZipFileSpec(
{
Path("___NAMESPACE___Foo.cls"): "System.debug('%%%NAMESPACE%%%blah');",
- Path("b.staticResource"): b"ns__\xFF\xFF",
+ Path("b.staticResource"): b"ns__\xff\xff",
}
)
== builder.zf
diff --git a/cumulusci/core/source_transforms/transforms.py b/cumulusci/core/source_transforms/transforms.py
index 9bf0499a8e..1ec5f2d28f 100644
--- a/cumulusci/core/source_transforms/transforms.py
+++ b/cumulusci/core/source_transforms/transforms.py
@@ -34,12 +34,10 @@ class SourceTransform(abc.ABC):
options_model: T.Optional[T.Type[BaseModel]]
identifier: str
- def __init__(self):
- ...
+ def __init__(self): ...
@abc.abstractmethod
- def process(self, zf: ZipFile, context: TaskContext) -> ZipFile:
- ...
+ def process(self, zf: ZipFile, context: TaskContext) -> ZipFile: ...
class SourceTransformSpec(BaseModel):
@@ -314,8 +312,7 @@ def validate_find_xpath(cls, values):
return values
@abc.abstractmethod
- def get_replace_string(self, context: TaskContext) -> str:
- ...
+ def get_replace_string(self, context: TaskContext) -> str: ...
class FindReplaceSpec(FindReplaceBaseSpec):
diff --git a/cumulusci/core/tasks.py b/cumulusci/core/tasks.py
index 7c8cffc2fb..5d7b4c6065 100644
--- a/cumulusci/core/tasks.py
+++ b/cumulusci/core/tasks.py
@@ -1,7 +1,8 @@
-""" Tasks are the basic unit of execution in CumulusCI.
+"""Tasks are the basic unit of execution in CumulusCI.
Subclass BaseTask or a descendant to define custom task logic
"""
+
import contextlib
import logging
import os
diff --git a/cumulusci/core/tests/fake_remote_repo/tasks/example.py b/cumulusci/core/tests/fake_remote_repo/tasks/example.py
index 48b0bdc685..b70e688050 100644
--- a/cumulusci/core/tests/fake_remote_repo/tasks/example.py
+++ b/cumulusci/core/tests/fake_remote_repo/tasks/example.py
@@ -8,7 +8,6 @@ def _run_task(self):
class StaticPreflightTask(BaseTask):
-
task_options = {
"task_name": {
"description": "Task that this preflight is for",
@@ -35,7 +34,7 @@ class StaticSleep(Sleep):
"description": "Task that this preflight is for",
"required": True,
},
- }
+ },
)
def _run_task(self):
diff --git a/cumulusci/core/tests/test_datasets_e2e.py b/cumulusci/core/tests/test_datasets_e2e.py
index 387ad696ad..6b5399d9ec 100644
--- a/cumulusci/core/tests/test_datasets_e2e.py
+++ b/cumulusci/core/tests/test_datasets_e2e.py
@@ -33,14 +33,17 @@ def setup_test(org_config):
describe_for("Contact"),
describe_for("Opportunity"),
)
- with patch.object(type(org_config), "is_person_accounts_enabled", False), patch(
- "cumulusci.core.datasets.get_org_schema",
- lambda _sf, org_config, **kwargs: _fake_get_org_schema(
- org_config,
- obj_describes,
- object_counts,
- included_objects=["Account", "Contact", "Opportunity"],
- **kwargs,
+ with (
+ patch.object(type(org_config), "is_person_accounts_enabled", False),
+ patch(
+ "cumulusci.core.datasets.get_org_schema",
+ lambda _sf, org_config, **kwargs: _fake_get_org_schema(
+ org_config,
+ obj_describes,
+ object_counts,
+ included_objects=["Account", "Contact", "Opportunity"],
+ **kwargs,
+ ),
),
):
yield
@@ -76,20 +79,19 @@ def test_datasets_e2e(
describe_for("Opportunity"),
)
- with patch.object(
- type(org_config), "is_person_accounts_enabled", False
- ), _fake_get_org_schema(
- org_config,
- obj_describes,
- object_counts,
- include_counts=True,
- filters=[Filters.extractable, Filters.createable],
- included_objects=["Account", "Contact", "Opportunity"],
- ) as schema, ensure_accounts(
- 6
- ), Dataset(
- "foo", project_config, sf, org_config, schema=schema
- ) as dataset:
+ with (
+ patch.object(type(org_config), "is_person_accounts_enabled", False),
+ _fake_get_org_schema(
+ org_config,
+ obj_describes,
+ object_counts,
+ include_counts=True,
+ filters=[Filters.extractable, Filters.createable],
+ included_objects=["Account", "Contact", "Opportunity"],
+ ) as schema,
+ ensure_accounts(6),
+ Dataset("foo", project_config, sf, org_config, schema=schema) as dataset,
+ ):
timer.checkpoint("In Dataset")
if dataset.path.exists():
rmtree(dataset.path)
@@ -183,18 +185,21 @@ def test_datasets_extract_standard_objects(
# Need record types for the RecordTypeId field to be in the org
create_record_type_for_account(sf, run_code_without_recording)
- with patch.object(type(org_config), "is_person_accounts_enabled", False), patch(
- "cumulusci.core.datasets.get_org_schema",
- lambda _sf, org_config, **kwargs: _fake_get_org_schema(
- org_config,
- obj_describes,
- object_counts,
- included_objects=["Account", "Contact", "Opportunity"],
- **kwargs,
+ with (
+ patch.object(type(org_config), "is_person_accounts_enabled", False),
+ patch(
+ "cumulusci.core.datasets.get_org_schema",
+ lambda _sf, org_config, **kwargs: _fake_get_org_schema(
+ org_config,
+ obj_describes,
+ object_counts,
+ included_objects=["Account", "Contact", "Opportunity"],
+ **kwargs,
+ ),
),
- ), ensure_accounts(6), Dataset(
- "bar", project_config, sf, org_config
- ) as dataset:
+ ensure_accounts(6),
+ Dataset("bar", project_config, sf, org_config) as dataset,
+ ):
timer.checkpoint("In Dataset")
if dataset.path.exists():
rmtree(dataset.path)
@@ -240,18 +245,21 @@ def test_datasets_read_explicit_extract_declaration(
describe_for("Lead"),
describe_for("Event"),
)
- with patch.object(type(org_config), "is_person_accounts_enabled", False), patch(
- "cumulusci.core.datasets.get_org_schema",
- lambda _sf, org_config, **kwargs: _fake_get_org_schema(
- org_config,
- obj_describes,
- object_counts,
- included_objects=["Account", "Contact", "Opportunity"],
- **kwargs,
+ with (
+ patch.object(type(org_config), "is_person_accounts_enabled", False),
+ patch(
+ "cumulusci.core.datasets.get_org_schema",
+ lambda _sf, org_config, **kwargs: _fake_get_org_schema(
+ org_config,
+ obj_describes,
+ object_counts,
+ included_objects=["Account", "Contact", "Opportunity"],
+ **kwargs,
+ ),
),
- ), ensure_accounts(6), Dataset(
- "bar", project_config, sf, org_config
- ) as dataset:
+ ensure_accounts(6),
+ Dataset("bar", project_config, sf, org_config) as dataset,
+ ):
if dataset.path.exists():
rmtree(dataset.path)
@@ -350,21 +358,24 @@ def fake_run_snowfakery(self):
assert "foo.recipe.yml" in self.options["recipe"]
called = True
- with setup_test(org_config), Dataset(
- "foo", project_config, sf, org_config, schema=None
- ) as dataset, patch(
- "cumulusci.core.datasets.Path.exists", fake_path_exists
- ), patch(
- "cumulusci.tasks.bulkdata.snowfakery.Snowfakery._run_task",
- fake_run_snowfakery,
+ with (
+ setup_test(org_config),
+ Dataset("foo", project_config, sf, org_config, schema=None) as dataset,
+ patch("cumulusci.core.datasets.Path.exists", fake_path_exists),
+ patch(
+ "cumulusci.tasks.bulkdata.snowfakery.Snowfakery._run_task",
+ fake_run_snowfakery,
+ ),
):
dataset.load()
assert called
def test_dataset_with_no_data_or_recipe(self, sf, project_config, org_config):
- with setup_test(org_config), Dataset(
- "fxoyoxz", project_config, sf, org_config, schema=None
- ) as dataset, pytest.raises(BulkDataException, match="fxoyoxz"):
+ with (
+ setup_test(org_config),
+ Dataset("fxoyoxz", project_config, sf, org_config, schema=None) as dataset,
+ pytest.raises(BulkDataException, match="fxoyoxz"),
+ ):
dataset.load()
diff --git a/cumulusci/core/tests/test_flowrunner.py b/cumulusci/core/tests/test_flowrunner.py
index 40c9257ffd..6b99a5cac2 100644
--- a/cumulusci/core/tests/test_flowrunner.py
+++ b/cumulusci/core/tests/test_flowrunner.py
@@ -780,9 +780,10 @@ def include_fake_project(self: BaseProjectConfig, _spec) -> BaseProjectConfig:
def test_cross_project_tasks(get_tempfile_logger):
# get_tempfile_logger doesn't clean up after itself which breaks other tests
get_tempfile_logger.return_value = mock.Mock(), ""
- with mock.patch("cumulusci.core.debug._DEBUG_MODE", get=lambda: True), mock.patch(
- "logging.Logger.info", wraps=lambda data: print(data)
- ) as out:
+ with (
+ mock.patch("cumulusci.core.debug._DEBUG_MODE", get=lambda: True),
+ mock.patch("logging.Logger.info", wraps=lambda data: print(data)) as out,
+ ):
cci.main(
[
"cci",
diff --git a/cumulusci/core/tests/test_github.py b/cumulusci/core/tests/test_github.py
index 23976bf0c8..64203cc92e 100644
--- a/cumulusci/core/tests/test_github.py
+++ b/cumulusci/core/tests/test_github.py
@@ -280,7 +280,7 @@ def test_get_auth_from_service(self, keychain_enterprise):
)
def test_determine_github_client(self, domain, client):
client_result = _determine_github_client(domain, {})
- assert type(client_result) == client
+ assert isinstance(client_result, client)
@responses.activate
def test_get_pull_requests_by_head(self, mock_util, repo):
diff --git a/cumulusci/core/tests/test_tasks.py b/cumulusci/core/tests/test_tasks.py
index 38535a8c46..e4c1a7fd61 100644
--- a/cumulusci/core/tests/test_tasks.py
+++ b/cumulusci/core/tests/test_tasks.py
@@ -1,4 +1,4 @@
-""" Tests for the CumulusCI task module """
+"""Tests for the CumulusCI task module"""
import collections
import logging
diff --git a/cumulusci/core/tests/utils.py b/cumulusci/core/tests/utils.py
index 0a0dc55ace..7a08654f76 100644
--- a/cumulusci/core/tests/utils.py
+++ b/cumulusci/core/tests/utils.py
@@ -1,4 +1,4 @@
-""" Utilities for testing CumulusCI
+"""Utilities for testing CumulusCI
MockLoggingHandler: a logging handler that we can assert"""
@@ -47,7 +47,6 @@ def reset(self):
class EnvironmentVarGuard(collections.abc.MutableMapping):
-
"""Class to help protect the environment variable properly. Can be used as
a context manager."""
@@ -90,7 +89,7 @@ def __enter__(self):
return self
def __exit__(self, *ignore_exc):
- for (k, v) in self._changed.items():
+ for k, v in self._changed.items():
if v is None:
if k in self._environ:
del self._environ[k]
diff --git a/cumulusci/core/utils.py b/cumulusci/core/utils.py
index 88cd570657..4745a18528 100644
--- a/cumulusci/core/utils.py
+++ b/cumulusci/core/utils.py
@@ -1,4 +1,4 @@
-""" Utilities for CumulusCI Core"""
+"""Utilities for CumulusCI Core"""
import copy
import glob
diff --git a/cumulusci/oauth/client.py b/cumulusci/oauth/client.py
index 91059eba15..2f6fea8113 100644
--- a/cumulusci/oauth/client.py
+++ b/cumulusci/oauth/client.py
@@ -201,8 +201,7 @@ def _create_httpd(self):
keyfile = "key.pem"
if not Path(certfile).is_file() or not Path(keyfile).is_file():
create_key_and_self_signed_cert()
- # FIXME: Use ssl.PROTOCOL_TLS_SERVER after dropping 3.8 support
- ssl_context = ssl.SSLContext(protocol=ssl.PROTOCOL_TLS)
+ ssl_context = ssl.SSLContext(protocol=ssl.PROTOCOL_TLS_SERVER)
ssl_context.load_cert_chain(certfile, keyfile)
httpd.socket = ssl_context.wrap_socket(
httpd.socket,
diff --git a/cumulusci/oauth/tests/test_client.py b/cumulusci/oauth/tests/test_client.py
index 430b18e9ab..c342fdc275 100644
--- a/cumulusci/oauth/tests/test_client.py
+++ b/cumulusci/oauth/tests/test_client.py
@@ -92,9 +92,9 @@ def run_code_and_check_exception():
break
time.sleep(0.01)
- assert (
- oauth_client.httpd
- ), "HTTPD did not start. Perhaps port 8080 cannot be accessed."
+ assert oauth_client.httpd, (
+ "HTTPD did not start. Perhaps port 8080 cannot be accessed."
+ )
try:
yield oauth_client
diff --git a/cumulusci/robotframework/SalesforceAPI.py b/cumulusci/robotframework/SalesforceAPI.py
index 588880182e..a257a1f7d9 100644
--- a/cumulusci/robotframework/SalesforceAPI.py
+++ b/cumulusci/robotframework/SalesforceAPI.py
@@ -211,9 +211,9 @@ def salesforce_collection_insert(self, objects):
| Salesforce Collection Insert ${objects}
"""
- assert (
- not obj.get("id", None) for obj in objects
- ), "Insertable objects should not have IDs"
+ assert (not obj.get("id", None) for obj in objects), (
+ "Insertable objects should not have IDs"
+ )
assert len(objects) <= SF_COLLECTION_INSERTION_LIMIT, (
"Cannot insert more than %s objects with this keyword"
% SF_COLLECTION_INSERTION_LIMIT
@@ -261,9 +261,9 @@ def salesforce_collection_update(self, objects):
"""
for obj in objects:
- assert obj[
- "id"
- ], "Should be a list of objects with Ids returned by Salesforce Collection Insert"
+ assert obj["id"], (
+ "Should be a list of objects with Ids returned by Salesforce Collection Insert"
+ )
if STATUS_KEY in obj:
del obj[STATUS_KEY]
diff --git a/cumulusci/robotframework/SalesforcePlaywright.py b/cumulusci/robotframework/SalesforcePlaywright.py
index 9eba2edf11..2302d1513b 100644
--- a/cumulusci/robotframework/SalesforcePlaywright.py
+++ b/cumulusci/robotframework/SalesforcePlaywright.py
@@ -207,7 +207,7 @@ def _check_for_classic(self):
self.browser.click("a.switch-to-lightning")
return True
- except (AssertionError):
+ except AssertionError:
return False
def breakpoint(self):
diff --git a/cumulusci/robotframework/locator_manager.py b/cumulusci/robotframework/locator_manager.py
index 8a0de6eafb..e3a0ae8a58 100644
--- a/cumulusci/robotframework/locator_manager.py
+++ b/cumulusci/robotframework/locator_manager.py
@@ -69,7 +69,7 @@ def add_location_strategies():
# exists, so we use a flag to make sure this code is called
# only once.
selenium = BuiltIn().get_library_instance("SeleniumLibrary")
- for (prefix, strategy) in LOCATORS.items():
+ for prefix, strategy in LOCATORS.items():
try:
logger.debug(f"adding location strategy for '{prefix}'")
selenium.add_location_strategy(
diff --git a/cumulusci/robotframework/locators_56.py b/cumulusci/robotframework/locators_56.py
index e6dd19ba8d..4060cf0d33 100644
--- a/cumulusci/robotframework/locators_56.py
+++ b/cumulusci/robotframework/locators_56.py
@@ -16,7 +16,7 @@
"div.desktop.container.oneOne.oneAppLayoutHost[data-aura-rendered-by]",
"list_view_menu": {
"button": "css:button[title='List View Controls']",
- "item": "//div[@title='List View " "Controls']//ul[@role='menu']//li/a[.='{}']",
+ "item": "//div[@title='List View Controls']//ul[@role='menu']//li/a[.='{}']",
},
"loading_box": "css: div.auraLoadingBox.oneLoadingBox",
"modal": {
@@ -27,8 +27,7 @@
"field_alert": "//div[contains(@class, 'forceFormPageError')]",
"has_error": "css: div.forceFormPageError",
"is_open": "css: div.uiModal div.panel.slds-modal",
- "review_alert": "//a[@records-recordediterror_recordediterror "
- "and text()='{}']",
+ "review_alert": "//a[@records-recordediterror_recordediterror and text()='{}']",
},
"object": {
"button": "//div[contains(@class, "
diff --git a/cumulusci/robotframework/pageobjects/ObjectManagerPageObject.py b/cumulusci/robotframework/pageobjects/ObjectManagerPageObject.py
index c0fc46a49a..a355e23d4f 100644
--- a/cumulusci/robotframework/pageobjects/ObjectManagerPageObject.py
+++ b/cumulusci/robotframework/pageobjects/ObjectManagerPageObject.py
@@ -203,7 +203,7 @@ def delete_custom_field(self, field_name):
except Exception as e:
self.builtin.log(
- f"on try #{tries+1} we caught this error: {e}", "DEBUG"
+ f"on try #{tries + 1} we caught this error: {e}", "DEBUG"
)
self.builtin.sleep("1 second")
last_error = e
diff --git a/cumulusci/robotframework/tests/CustomObjectTestPage.py b/cumulusci/robotframework/tests/CustomObjectTestPage.py
index d91903929b..5bd54b6965 100644
--- a/cumulusci/robotframework/tests/CustomObjectTestPage.py
+++ b/cumulusci/robotframework/tests/CustomObjectTestPage.py
@@ -1,6 +1,7 @@
"""
This class is used by test_pageobjects
"""
+
from cumulusci.robotframework.pageobjects import ListingPage, pageobject
diff --git a/cumulusci/robotframework/tests/salesforce/TestLibraryA.py b/cumulusci/robotframework/tests/salesforce/TestLibraryA.py
index bbbda7aeeb..6f42d3023d 100644
--- a/cumulusci/robotframework/tests/salesforce/TestLibraryA.py
+++ b/cumulusci/robotframework/tests/salesforce/TestLibraryA.py
@@ -2,6 +2,7 @@
This is a library used by locators.robot for testing
custom locator strategies
"""
+
from cumulusci.robotframework.locator_manager import (
register_locators,
translate_locator,
diff --git a/cumulusci/robotframework/tests/salesforce/TestLibraryB.py b/cumulusci/robotframework/tests/salesforce/TestLibraryB.py
index fcea759f1f..271411706b 100644
--- a/cumulusci/robotframework/tests/salesforce/TestLibraryB.py
+++ b/cumulusci/robotframework/tests/salesforce/TestLibraryB.py
@@ -2,6 +2,7 @@
This is a library used by locators.robot for testing
custom locator strategies
"""
+
from cumulusci.robotframework.locator_manager import (
register_locators,
translate_locator,
diff --git a/cumulusci/robotframework/tests/salesforce/TestListener.py b/cumulusci/robotframework/tests/salesforce/TestListener.py
index 0894da4ceb..7c69e7c689 100644
--- a/cumulusci/robotframework/tests/salesforce/TestListener.py
+++ b/cumulusci/robotframework/tests/salesforce/TestListener.py
@@ -1,15 +1,16 @@
"""This hybrid library/listener can be used to verify messages that
- have been logged and keywords have been called.
+have been logged and keywords have been called.
- This works by listening for log messages and keywords via the
- listener interface, and saving them in a cache. Keywords are
- provided for doing assertions on called keywords and for resetting
- the cache.
+This works by listening for log messages and keywords via the
+listener interface, and saving them in a cache. Keywords are
+provided for doing assertions on called keywords and for resetting
+the cache.
- The keyword cache is reset for each test case to help keep it
- from growing too large.
+The keyword cache is reset for each test case to help keep it
+from growing too large.
"""
+
import re
diff --git a/cumulusci/robotframework/tests/salesforce/labels.html b/cumulusci/robotframework/tests/salesforce/labels.html
index c1b8bde22e..8c9190df91 100644
--- a/cumulusci/robotframework/tests/salesforce/labels.html
+++ b/cumulusci/robotframework/tests/salesforce/labels.html
@@ -1,4 +1,4 @@
-<!DOCTYPE html>
+<!doctype html>
Labels for testing
diff --git a/cumulusci/robotframework/tests/test_cumulusci_library.py b/cumulusci/robotframework/tests/test_cumulusci_library.py
index ff17b2b190..51639a979a 100644
--- a/cumulusci/robotframework/tests/test_cumulusci_library.py
+++ b/cumulusci/robotframework/tests/test_cumulusci_library.py
@@ -72,9 +72,9 @@ def test_robot_logger_supports_warning(self):
self.cumulusci.run_task("get_pwd")
args, kwargs = self.cumulusci._run_task.call_args
task = args[0]
- assert hasattr(
- task.logger, "warning"
- ), "robot logger should have a warning method but doesn't"
+ assert hasattr(task.logger, "warning"), (
+ "robot logger should have a warning method but doesn't"
+ )
def test_robot_logger_supports_log(self):
"""Verify that 'run task' uses a logger that supports .log()
@@ -89,7 +89,6 @@ def test_robot_logger_supports_log(self):
args, kwargs = self.cumulusci._run_task.call_args
task = args[0]
with mock.patch.object(task.logger, "write") as logger_write:
-
task.logger.log(logging.CRITICAL, "a critical message")
task.logger.log(logging.ERROR, "an error message")
task.logger.log(logging.WARN, "a warning message")
@@ -269,7 +268,6 @@ def test_login_url_user_org_with_get_access_token(self):
with mock.patch.object(
self.cumulusci.org, "get_access_token", return_value="super-secret-token"
):
-
url = self.cumulusci.login_url(username="test@example.com")
self.cumulusci.org.get_access_token.assert_called_once_with(
username="test@example.com"
diff --git a/cumulusci/robotframework/tests/test_pageobjects.py b/cumulusci/robotframework/tests/test_pageobjects.py
index 3f522d1467..ae6cb47f5a 100644
--- a/cumulusci/robotframework/tests/test_pageobjects.py
+++ b/cumulusci/robotframework/tests/test_pageobjects.py
@@ -182,7 +182,6 @@ def test_namespaced_object_name(self, get_context_mock, get_library_instance_moc
CumulusCI, "get_namespace_prefix", return_value="foobar__"
):
with reload_PageObjects(FOO_PATH) as po:
-
FooTestPage = importer.import_class_or_module_by_path(FOO_PATH)
MockGetLibraryInstance.libs["FooTestPage"] = _PageObjectLibrary(
FooTestPage()
@@ -197,7 +196,6 @@ def test_non_namespaced_object_name(
"""Verify that the object name is not prefixed by a namespace when there is no namespace"""
with mock.patch.object(CumulusCI, "get_namespace_prefix", return_value=""):
with reload_PageObjects(FOO_PATH) as po:
-
FooTestPage = importer.import_class_or_module_by_path(FOO_PATH)
MockGetLibraryInstance.libs["FooTestPage"] = _PageObjectLibrary(
FooTestPage()
diff --git a/cumulusci/robotframework/tests/test_salesforce_locators.py b/cumulusci/robotframework/tests/test_salesforce_locators.py
index 111c1e1173..0a706b7fe0 100644
--- a/cumulusci/robotframework/tests/test_salesforce_locators.py
+++ b/cumulusci/robotframework/tests/test_salesforce_locators.py
@@ -66,8 +66,8 @@ def test_locators_57(self):
keys_56 = set(locators_56.lex_locators)
keys_57 = set(locators_57.lex_locators)
- assert id(locators_56.lex_locators) != id(
- locators_57.lex_locators
- ), "locators_56.lex_locators and locators_57.lex_locators are the same object"
+ assert id(locators_56.lex_locators) != id(locators_57.lex_locators), (
+ "locators_56.lex_locators and locators_57.lex_locators are the same object"
+ )
assert len(keys_56) > 0
assert keys_57.issubset(keys_56)
diff --git a/cumulusci/robotframework/utils.py b/cumulusci/robotframework/utils.py
index 299545d1d0..513ef6b7a2 100644
--- a/cumulusci/robotframework/utils.py
+++ b/cumulusci/robotframework/utils.py
@@ -72,7 +72,6 @@ def set_pdb_trace(pm=False): # pragma: no cover
class RetryingSeleniumLibraryMixin(object):
-
debug = False
@property
diff --git a/cumulusci/salesforce_api/org_schema.py b/cumulusci/salesforce_api/org_schema.py
index 2dd27a2844..e10f9f2f17 100644
--- a/cumulusci/salesforce_api/org_schema.py
+++ b/cumulusci/salesforce_api/org_schema.py
@@ -151,6 +151,7 @@ def get(self, name: str):
def block_writing(self):
"""After this method is called, the database can't be updated again"""
+
# changes don't get saved back to the gzip
# so there is no point writing to the DB
def closed():
@@ -232,8 +233,7 @@ def _populate_cache_from_describe(self, describe_objs: List["DescribeUpdate"]):
metadata.reflect()
with BufferedSession(engine, metadata) as sess:
-
- for (sobj_data, last_modified) in describe_objs:
+ for sobj_data, last_modified in describe_objs:
sobj_data = sobj_data.copy()
fields = sobj_data.pop("fields")
sobj_data["last_modified_date"] = last_modified
diff --git a/cumulusci/salesforce_api/rest_deploy.py b/cumulusci/salesforce_api/rest_deploy.py
index 70d532569a..9276f42db2 100644
--- a/cumulusci/salesforce_api/rest_deploy.py
+++ b/cumulusci/salesforce_api/rest_deploy.py
@@ -138,7 +138,7 @@ def _reformat_zip(self, package_zip):
# Construct an error message from deployment failure details
def _construct_error_message(self, failure):
- error_message = f"{str.upper(failure['problemType'])} in file {failure['fileName'][len(PARENT_DIR_NAME)+len('/'):]}: {failure['problem']}"
+ error_message = f"{str.upper(failure['problemType'])} in file {failure['fileName'][len(PARENT_DIR_NAME) + len('/') :]}: {failure['problem']}"
if failure["lineNumber"] and failure["columnNumber"]:
error_message += (
diff --git a/cumulusci/salesforce_api/retrieve_profile_api.py b/cumulusci/salesforce_api/retrieve_profile_api.py
index 72aa2c963f..3ffda70473 100644
--- a/cumulusci/salesforce_api/retrieve_profile_api.py
+++ b/cumulusci/salesforce_api/retrieve_profile_api.py
@@ -266,7 +266,7 @@ def _process_setupEntityAccess_results(self, result_list: List[dict]):
and item["NamespacePrefix"] is not None
):
extracted_values[data.package_xml_name].append(
- f'{item["NamespacePrefix"]}__{item[data.columns[0]]}'
+ f"{item['NamespacePrefix']}__{item[data.columns[0]]}"
)
else:
extracted_values[data.package_xml_name].append(
diff --git a/cumulusci/salesforce_api/tests/test_package_zip.py b/cumulusci/salesforce_api/tests/test_package_zip.py
index fe4a85bbd3..fa27b78b5b 100644
--- a/cumulusci/salesforce_api/tests/test_package_zip.py
+++ b/cumulusci/salesforce_api/tests/test_package_zip.py
@@ -37,7 +37,6 @@ def test_as_hash(self):
class TestMetadataPackageZipBuilder:
def test_builder(self, task_context):
with temporary_dir() as path:
-
# add package.xml
with open(os.path.join(path, "package.xml"), "w") as f:
f.write(
diff --git a/cumulusci/salesforce_api/tests/test_retrieve_profile_api.py b/cumulusci/salesforce_api/tests/test_retrieve_profile_api.py
index 99cc67eef3..bd16bfcf6b 100644
--- a/cumulusci/salesforce_api/tests/test_retrieve_profile_api.py
+++ b/cumulusci/salesforce_api/tests/test_retrieve_profile_api.py
@@ -38,9 +38,10 @@ def test_init_task(retrieve_profile_api_instance):
def test_retrieve_existing_profiles(retrieve_profile_api_instance):
profiles = ["Profile1", "Profile2", "Admin"]
result = {"records": [{"Name": "Profile1"}]}
- with patch.object(
- RetrieveProfileApi, "_build_query", return_value="some_query"
- ), patch.object(RetrieveProfileApi, "_run_query", return_value=result):
+ with (
+ patch.object(RetrieveProfileApi, "_build_query", return_value="some_query"),
+ patch.object(RetrieveProfileApi, "_run_query", return_value=result),
+ ):
existing_profiles = retrieve_profile_api_instance._retrieve_existing_profiles(
profiles
)
@@ -174,14 +175,18 @@ def test_process_setupEntityAccess_results(retrieve_profile_api_instance):
"ApexPage": [{"Id": "002def", "Name": "TestApexPage"}],
"CustomPermission": [],
}
- with patch.object(
- RetrieveProfileApi, "_build_query", return_value="SELECT Id, Name FROM Table"
- ) as mock_build_query, patch.object(
- RunParallelQueries,
- "_run_queries_in_parallel",
- return_value=queries_result,
- ) as mock_run_queries:
-
+ with (
+ patch.object(
+ RetrieveProfileApi,
+ "_build_query",
+ return_value="SELECT Id, Name FROM Table",
+ ) as mock_build_query,
+ patch.object(
+ RunParallelQueries,
+ "_run_queries_in_parallel",
+ return_value=queries_result,
+ ) as mock_run_queries,
+ ):
(
entities,
result,
@@ -222,29 +227,34 @@ def test_process_all_results(retrieve_profile_api_instance):
"customTab": "some_result",
"profileFlow": "some_result",
}
- with patch.object(
- RetrieveProfileApi,
- "_process_setupEntityAccess_results",
- return_value=(
- {
- "ApexClass": ["TestApexClass"],
- "ApexPage": ["TestApexPage"],
- "FlowDefinition": ["TestFlow"],
- },
- {"FlowDefinition": ["some_result"]},
+ with (
+ patch.object(
+ RetrieveProfileApi,
+ "_process_setupEntityAccess_results",
+ return_value=(
+ {
+ "ApexClass": ["TestApexClass"],
+ "ApexPage": ["TestApexPage"],
+ "FlowDefinition": ["TestFlow"],
+ },
+ {"FlowDefinition": ["some_result"]},
+ ),
+ ),
+ patch.object(
+ RetrieveProfileApi,
+ "_process_sObject_results",
+ return_value={"CustomObject": ["TestObject"]},
+ ),
+ patch.object(
+ RetrieveProfileApi,
+ "_process_customTab_results",
+ return_value={"CustomTab": ["TestTab"]},
+ ),
+ patch.object(
+ RetrieveProfileApi,
+ "_match_profiles_and_flows",
+ return_value={"Profile1": ["Flow1"]},
),
- ), patch.object(
- RetrieveProfileApi,
- "_process_sObject_results",
- return_value={"CustomObject": ["TestObject"]},
- ), patch.object(
- RetrieveProfileApi,
- "_process_customTab_results",
- return_value={"CustomTab": ["TestTab"]},
- ), patch.object(
- RetrieveProfileApi,
- "_match_profiles_and_flows",
- return_value={"Profile1": ["Flow1"]},
):
entities, profile_flow = retrieve_profile_api_instance._process_all_results(
result_dict
@@ -291,16 +301,19 @@ def test_retrieve_permissionable_entities(retrieve_profile_api_instance):
{"Profile1": ["Flow1"]},
)
- with patch.object(
- RunParallelQueries, "_run_queries_in_parallel"
- ) as mock_run_queries, patch.object(
- RetrieveProfileApi,
- "_queries_retrieve_permissions",
- return_value=expected_queries,
- ), patch.object(
- RetrieveProfileApi, "_process_all_results", return_value=expected_result
+ with (
+ patch.object(
+ RunParallelQueries, "_run_queries_in_parallel"
+ ) as mock_run_queries,
+ patch.object(
+ RetrieveProfileApi,
+ "_queries_retrieve_permissions",
+ return_value=expected_queries,
+ ),
+ patch.object(
+ RetrieveProfileApi, "_process_all_results", return_value=expected_result
+ ),
):
-
result = retrieve_profile_api_instance._retrieve_permissionable_entities(
profiles
)
diff --git a/cumulusci/tasks/apex/batch.py b/cumulusci/tasks/apex/batch.py
index d10fad278e..fcb1732d40 100644
--- a/cumulusci/tasks/apex/batch.py
+++ b/cumulusci/tasks/apex/batch.py
@@ -1,4 +1,5 @@
-""" a task for waiting on a Batch Apex job to complete """
+"""a task for waiting on a Batch Apex job to complete"""
+
from datetime import datetime
from typing import Optional, Sequence
diff --git a/cumulusci/tasks/apex/testrunner.py b/cumulusci/tasks/apex/testrunner.py
index 5a51f655ed..e299ee1388 100644
--- a/cumulusci/tasks/apex/testrunner.py
+++ b/cumulusci/tasks/apex/testrunner.py
@@ -1,4 +1,4 @@
-""" CumulusCI Tasks for running Apex Tests """
+"""CumulusCI Tasks for running Apex Tests"""
import html
import io
@@ -264,7 +264,6 @@ def _init_class(self):
def _get_namespace_filter(self):
if self.options.get("managed"):
-
namespace = self.options.get("namespace")
if not namespace:
@@ -430,9 +429,9 @@ def _get_test_results(self, allow_retries=True):
for test_result in result["records"]:
class_name = self.classes_by_id[test_result["ApexClassId"]]
- self.results_by_class_name[class_name][
- test_result["MethodName"]
- ] = test_result
+ self.results_by_class_name[class_name][test_result["MethodName"]] = (
+ test_result
+ )
self.counts[test_result["Outcome"]] += 1
# If we have class-level failures that did not come with line-level
diff --git a/cumulusci/tasks/bulkdata/extract.py b/cumulusci/tasks/bulkdata/extract.py
index adcaa0f4e1..95b31dc485 100644
--- a/cumulusci/tasks/bulkdata/extract.py
+++ b/cumulusci/tasks/bulkdata/extract.py
@@ -95,7 +95,6 @@ def _init_db(self):
self.models = {}
with self._database_url() as database_url:
-
# initialize the DB engine
parent_engine = create_engine(database_url)
with parent_engine.connect() as connection:
diff --git a/cumulusci/tasks/bulkdata/extract_dataset_utils/extract_yml.py b/cumulusci/tasks/bulkdata/extract_dataset_utils/extract_yml.py
index 1ece5c1cd7..af84f728bd 100644
--- a/cumulusci/tasks/bulkdata/extract_dataset_utils/extract_yml.py
+++ b/cumulusci/tasks/bulkdata/extract_dataset_utils/extract_yml.py
@@ -43,9 +43,9 @@ def parse_field_complex_type(fieldspec):
def assert_sf_object_fits_pattern(self):
if self.is_group:
- assert (
- self.group_type in SFObjectGroupTypes
- ), f"Expected OBJECTS(ALL), OBJECTS(CUSTOM) or OBJECTS(STANDARD), not `{self.group_type.upper()}`"
+ assert self.group_type in SFObjectGroupTypes, (
+ f"Expected OBJECTS(ALL), OBJECTS(CUSTOM) or OBJECTS(STANDARD), not `{self.group_type.upper()}`"
+ )
else:
assert self.sf_object.isidentifier(), (
"Value should start with OBJECTS( or be a simple alphanumeric field name"
@@ -55,9 +55,9 @@ def assert_sf_object_fits_pattern(self):
def assert_check_where_against_complex(self):
"""Check that a where clause was not used with a group declaration."""
- assert not (
- self.where and self.is_group
- ), "Cannot specify a `where` clause on a declaration for multiple kinds of objects."
+ assert not (self.where and self.is_group), (
+ "Cannot specify a `where` clause on a declaration for multiple kinds of objects."
+ )
@validator("fields_")
def normalize_fields(cls, vals):
diff --git a/cumulusci/tasks/bulkdata/extract_dataset_utils/tests/test_synthesize_extract_declarations.py b/cumulusci/tasks/bulkdata/extract_dataset_utils/tests/test_synthesize_extract_declarations.py
index b690ee96e0..40f403dfbe 100644
--- a/cumulusci/tasks/bulkdata/extract_dataset_utils/tests/test_synthesize_extract_declarations.py
+++ b/cumulusci/tasks/bulkdata/extract_dataset_utils/tests/test_synthesize_extract_declarations.py
@@ -518,21 +518,26 @@ def _fake_get_org_schema(
object_counts: T.Dict[str, int],
**kwargs,
):
- with mock.patch(
- "cumulusci.salesforce_api.org_schema.count_sobjects",
- lambda *args: (
- object_counts,
- [],
- [],
+ with (
+ mock.patch(
+ "cumulusci.salesforce_api.org_schema.count_sobjects",
+ lambda *args: (
+ object_counts,
+ [],
+ [],
+ ),
+ ),
+ mock.patch(
+ "cumulusci.salesforce_api.org_schema.ZippableTempDb", FakeZippableTempDb
+ ),
+ mock.patch(
+ "cumulusci.salesforce_api.org_schema.deep_describe",
+ return_value=(
+ (desc, "Sat, 1 Jan 2000 00:00:01 GMT") for desc in org_describes
+ ),
),
- ), mock.patch(
- "cumulusci.salesforce_api.org_schema.ZippableTempDb", FakeZippableTempDb
- ), mock.patch(
- "cumulusci.salesforce_api.org_schema.deep_describe",
- return_value=((desc, "Sat, 1 Jan 2000 00:00:01 GMT") for desc in org_describes),
- ), get_org_schema(
- FakeSF(), org_config, **kwargs
- ) as schema:
+ get_org_schema(FakeSF(), org_config, **kwargs) as schema,
+ ):
yield schema
diff --git a/cumulusci/tasks/bulkdata/generate_mapping_utils/dependency_map.py b/cumulusci/tasks/bulkdata/generate_mapping_utils/dependency_map.py
index ebd5f6d2d4..23ecb84de0 100644
--- a/cumulusci/tasks/bulkdata/generate_mapping_utils/dependency_map.py
+++ b/cumulusci/tasks/bulkdata/generate_mapping_utils/dependency_map.py
@@ -43,9 +43,9 @@ def _map_references(
for dep in intertable_dependencies:
table_deps = self.dependencies[dep.table_name_from]
table_deps.add(dep)
- self.reference_fields[
- (dep.table_name_from, dep.field_name)
- ] = dep.table_names_to
+ self.reference_fields[(dep.table_name_from, dep.field_name)] = (
+ dep.table_names_to
+ )
def target_table_for(
self, tablename: str, fieldname: str
diff --git a/cumulusci/tasks/bulkdata/generate_mapping_utils/tests/test_generate_extract_mapping_from_declarations.py b/cumulusci/tasks/bulkdata/generate_mapping_utils/tests/test_generate_extract_mapping_from_declarations.py
index f51f94db39..c465d6722a 100644
--- a/cumulusci/tasks/bulkdata/generate_mapping_utils/tests/test_generate_extract_mapping_from_declarations.py
+++ b/cumulusci/tasks/bulkdata/generate_mapping_utils/tests/test_generate_extract_mapping_from_declarations.py
@@ -34,7 +34,7 @@ def test_simple_generate_mapping_from_declarations(self, org_config):
"api": "smart",
"sf_object": "Account",
"fields": ["Name", "Description"],
- "soql_filter": "Name != 'Sample Account for " "Entitlements'",
+ "soql_filter": "Name != 'Sample Account for Entitlements'",
}
}
diff --git a/cumulusci/tasks/bulkdata/load.py b/cumulusci/tasks/bulkdata/load.py
index 2bb148869d..2f378ddfad 100644
--- a/cumulusci/tasks/bulkdata/load.py
+++ b/cumulusci/tasks/bulkdata/load.py
@@ -750,9 +750,9 @@ def _initialize_id_table(self, should_reset_table):
Column("id", Unicode(255), primary_key=True),
Column("sf_id", Unicode(18)),
)
- if id_table.exists():
- id_table.drop()
- id_table.create()
+ if self.inspector.has_table(self.ID_TABLE_NAME):
+ id_table.drop(self.metadata.bind)
+ id_table.create(self.metadata.bind)
def _sqlite_load(self):
"""Read a SQLite script and initialize the in-memory database."""
diff --git a/cumulusci/tasks/bulkdata/mapping_parser.py b/cumulusci/tasks/bulkdata/mapping_parser.py
index 63ed9c48f1..98b2041936 100644
--- a/cumulusci/tasks/bulkdata/mapping_parser.py
+++ b/cumulusci/tasks/bulkdata/mapping_parser.py
@@ -47,6 +47,7 @@ def has_errors(self) -> bool:
class MappingLookup(CCIDictModel):
"Lookup relationship between two tables."
+
table: Union[str, List[str]] # Support for polymorphic lookups
key_field: Optional[str] = None
value_field: Optional[str] = None
@@ -102,6 +103,7 @@ class BulkMode(StrEnum):
class MappingStep(CCIDictModel):
"Step in a load or extract process"
+
sf_object: str
table: Optional[str] = None
fields_: Dict[str, str] = Field({}, alias="fields")
@@ -113,9 +115,9 @@ class MappingStep(CCIDictModel):
batch_size: int = None
oid_as_pk: bool = False # this one should be discussed and probably deprecated
record_type: Optional[str] = None # should be discussed and probably deprecated
- bulk_mode: Optional[
- Literal["Serial", "Parallel"]
- ] = None # default should come from task options
+ bulk_mode: Optional[Literal["Serial", "Parallel"]] = (
+ None # default should come from task options
+ )
anchor_date: Optional[Union[str, date]] = None
soql_filter: Optional[str] = None # soql_filter property
select_options: Optional[SelectOptions] = Field(
@@ -138,9 +140,9 @@ def split_update_key(cls, val):
if isinstance(val, str):
return tuple(v.strip() for v in val.split(","))
else:
- assert isinstance(
- val, (str, list, tuple)
- ), "`update_key` should be a field name or list of field names."
+ assert isinstance(val, (str, list, tuple)), (
+ "`update_key` should be a field name or list of field names."
+ )
assert False, "Should be unreachable" # pragma: no cover
@root_validator
@@ -336,9 +338,9 @@ def validate_update_key_and_upsert(cls, v):
if action == DataOperationType.UPSERT:
assert update_key, "'update_key' must always be supplied for upsert."
- assert (
- len(update_key) == 1
- ), "simple upserts can only support one field at a time."
+ assert len(update_key) == 1, (
+ "simple upserts can only support one field at a time."
+ )
elif action in (DataOperationType.ETL_UPSERT, DataOperationType.SMART_UPSERT):
assert update_key, "'update_key' must always be supplied for upsert."
else:
@@ -346,9 +348,9 @@ def validate_update_key_and_upsert(cls, v):
if update_key:
for key in update_key:
- assert key.lower() in (
- f.lower() for f in v["fields_"]
- ), f"`update_key`: {key} not found in `fields``"
+ assert key.lower() in (f.lower() for f in v["fields_"]), (
+ f"`update_key`: {key} not found in `fields``"
+ )
return v
@@ -677,6 +679,7 @@ def dict(self, by_alias=True, exclude_defaults=True, **kwargs):
class MappingSteps(CCIDictModel):
"Mapping of named steps"
+
__root__: Dict[str, MappingStep]
@root_validator(pre=False)
@@ -684,9 +687,9 @@ class MappingSteps(CCIDictModel):
def validate_and_inject_mapping(cls, values):
if values:
oids = ["Id" in s.fields_ for s in values["__root__"].values()]
- assert all(oids) or not any(
- oids
- ), "Id must be mapped in all steps or in no steps."
+ assert all(oids) or not any(oids), (
+ "Id must be mapped in all steps or in no steps."
+ )
return values
diff --git a/cumulusci/tasks/bulkdata/query_transformers.py b/cumulusci/tasks/bulkdata/query_transformers.py
index 3f632c694e..9827d9795a 100644
--- a/cumulusci/tasks/bulkdata/query_transformers.py
+++ b/cumulusci/tasks/bulkdata/query_transformers.py
@@ -243,7 +243,6 @@ def outerjoins_to_add(self):
]
except KeyError as f:
-
raise BulkDataException(
"A record type mapping table was not found in your dataset. "
f"Was it generated by extract_data? {e}",
diff --git a/cumulusci/tasks/bulkdata/snowfakery.py b/cumulusci/tasks/bulkdata/snowfakery.py
index 54645ac290..818d189488 100644
--- a/cumulusci/tasks/bulkdata/snowfakery.py
+++ b/cumulusci/tasks/bulkdata/snowfakery.py
@@ -61,7 +61,6 @@
class Snowfakery(BaseSalesforceApiTask):
-
task_docs = """
Do a data load with Snowfakery.
diff --git a/cumulusci/tasks/bulkdata/tests/test_extract.py b/cumulusci/tasks/bulkdata/tests/test_extract.py
index 996584a2a5..e6ccdcaa8d 100644
--- a/cumulusci/tasks/bulkdata/tests/test_extract.py
+++ b/cumulusci/tasks/bulkdata/tests/test_extract.py
@@ -45,12 +45,15 @@ def _job_state_from_batches(self, job_id):
def get_results(self):
return extracted_records[self.sobject]
- with mock.patch(
- "cumulusci.tasks.bulkdata.step.BulkApiQueryOperation.get_results",
- get_results,
- ), mock.patch(
- "cumulusci.tasks.bulkdata.step.BulkJobMixin._job_state_from_batches",
- _job_state_from_batches,
+ with (
+ mock.patch(
+ "cumulusci.tasks.bulkdata.step.BulkApiQueryOperation.get_results",
+ get_results,
+ ),
+ mock.patch(
+ "cumulusci.tasks.bulkdata.step.BulkJobMixin._job_state_from_batches",
+ _job_state_from_batches,
+ ),
):
yield
@@ -81,7 +84,6 @@ def query(self):
class TestExtractData:
-
mapping_file_v1 = "mapping_v1.yml"
mapping_file_v2 = "mapping_v2.yml"
mapping_file_poly = "mapping_poly.yml"
@@ -187,7 +189,6 @@ def test_run__person_accounts_enabled(self, query_op_mock):
task()
with create_engine(task.options["database_url"]).connect() as conn:
-
household = next(conn.execute("select * from households"))
assert household.sf_id == "1"
assert household.IsPersonAccount == "false"
@@ -238,12 +239,12 @@ def test_run__sql(self, query_op_mock):
task()
assert os.path.exists("testdata.sql")
- assert ce_mock.mock_calls[0][1][0].endswith(
- "temp_db.db"
- ), ce_mock.mock_calls[0][1][0]
- assert ce_mock.mock_calls[0][1][0].startswith(
- "sqlite:///"
- ), ce_mock.mock_calls[0][1][0]
+ assert ce_mock.mock_calls[0][1][0].endswith("temp_db.db"), (
+ ce_mock.mock_calls[0][1][0]
+ )
+ assert ce_mock.mock_calls[0][1][0].startswith("sqlite:///"), (
+ ce_mock.mock_calls[0][1][0]
+ )
@responses.activate
@mock.patch("cumulusci.tasks.bulkdata.extract.get_query_operation")
@@ -757,9 +758,7 @@ def test_convert_lookups_to_id(self):
}
task.session.query.return_value.filter.return_value.count.return_value = 0
- task.session.query.return_value.filter.return_value.update.return_value.rowcount = (
- 0
- )
+ task.session.query.return_value.filter.return_value.update.return_value.rowcount = 0
task._convert_lookups_to_id(
MappingStep(
sf_object="Opportunity",
@@ -1199,8 +1198,9 @@ def test_import_results__autopk(self, create_task_fixture):
]
],
}
- with mock_extract_jobs(task, extracted_records), mock_salesforce_client(
- task
+ with (
+ mock_extract_jobs(task, extracted_records),
+ mock_salesforce_client(task),
):
task()
with create_engine(task.options["database_url"]).connect() as conn:
@@ -1303,9 +1303,9 @@ def test_run_soql_filter_no_record_type(self):
)
soql = task._soql_for_mapping(mapping)
- assert (
- "WHERE Name = 'John Doe'" in soql
- ), "filter should be applied just on name"
- assert (
- "DeveloperName" not in soql
- ), "DeveloperName should not appear in the soql query as it is missing in mapping"
+ assert "WHERE Name = 'John Doe'" in soql, (
+ "filter should be applied just on name"
+ )
+ assert "DeveloperName" not in soql, (
+ "DeveloperName should not appear in the soql query as it is missing in mapping"
+ )
diff --git a/cumulusci/tasks/bulkdata/tests/test_generate_from_snowfakery_task.py b/cumulusci/tasks/bulkdata/tests/test_generate_from_snowfakery_task.py
index 87845ebc8b..54d4d5d59d 100644
--- a/cumulusci/tasks/bulkdata/tests/test_generate_from_snowfakery_task.py
+++ b/cumulusci/tasks/bulkdata/tests/test_generate_from_snowfakery_task.py
@@ -376,7 +376,9 @@ def test_generate_continuation_file(self):
)
task()
continuation_file = yaml.safe_load(open(temp_continuation_file))
- assert continuation_file # internals of this file are not important to CumulusCI
+ assert (
+ continuation_file
+ ) # internals of this file are not important to CumulusCI
def _get_mapping_file(self, **options):
with temporary_file_path("mapping.yml") as temp_mapping:
diff --git a/cumulusci/tasks/bulkdata/tests/test_load.py b/cumulusci/tasks/bulkdata/tests/test_load.py
index f413cf4ed7..1ffbd8a248 100644
--- a/cumulusci/tasks/bulkdata/tests/test_load.py
+++ b/cumulusci/tasks/bulkdata/tests/test_load.py
@@ -266,11 +266,14 @@ def test__perform_rollback(self):
task.metadata = mock.Mock()
task.metadata.sorted_tables = [table_insert, table_upsert]
- with mock.patch.object(
- CreateRollback, "_perform_rollback"
- ) as mock_insert_rollback, mock.patch.object(
- UpdateRollback, "_perform_rollback"
- ) as mock_upsert_rollback:
+ with (
+ mock.patch.object(
+ CreateRollback, "_perform_rollback"
+ ) as mock_insert_rollback,
+ mock.patch.object(
+ UpdateRollback, "_perform_rollback"
+ ) as mock_upsert_rollback,
+ ):
Rollback._perform_rollback(task)
mock_insert_rollback.assert_called_once_with(task, table_insert)
@@ -862,9 +865,10 @@ def test_process_lookup_fields_polymorphic(self):
"Who.Contact.LastName",
"Who.Lead.LastName",
}
- with mock.patch(
- "cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"
- ), mock.patch.object(task, "sf", create=True):
+ with (
+ mock.patch("cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"),
+ mock.patch.object(task, "sf", create=True),
+ ):
task._init_mapping()
with task._init_db():
task._old_format = mock.Mock(return_value=False)
@@ -911,9 +915,10 @@ def test_process_lookup_fields_non_polymorphic(self):
"Account.Name",
"Account.AccountNumber",
}
- with mock.patch(
- "cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"
- ), mock.patch.object(task, "sf", create=True):
+ with (
+ mock.patch("cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"),
+ mock.patch.object(task, "sf", create=True),
+ ):
task._init_mapping()
with task._init_db():
task._old_format = mock.Mock(return_value=False)
@@ -981,15 +986,17 @@ def test_get_statics_record_type_not_matched(self):
task.sf = mock.Mock()
task.sf.query.return_value = {"records": []}
with pytest.raises(BulkDataException) as e:
- task._get_statics(
- MappingStep(
- sf_object="Account",
- action="insert",
- fields={"Id": "sf_id", "Name": "Name"},
- static={"Industry": "Technology"},
- record_type="Organization",
- )
- ),
+ (
+ task._get_statics(
+ MappingStep(
+ sf_object="Account",
+ action="insert",
+ fields={"Id": "sf_id", "Name": "Name"},
+ static={"Industry": "Technology"},
+ record_type="Organization",
+ )
+ ),
+ )
assert "RecordType" in str(e.value)
def test_query_db__joins_self_lookups(self):
@@ -1290,7 +1297,7 @@ def test_initialize_id_table__already_exists(self):
id_table.create()
task._initialize_id_table(True)
new_id_table = task.metadata.tables["cumulusci_id_table"]
- assert not (new_id_table is id_table)
+ assert new_id_table is not id_table
def test_initialize_id_table__already_exists_and_should_not_reset_table(self):
task = _make_task(
@@ -1469,9 +1476,10 @@ def test_process_job_results__exception_failure(self):
mapping = MappingStep(sf_object="Account", action=DataOperationType.UPDATE)
- with mock.patch(
- "cumulusci.tasks.bulkdata.load.sql_bulk_insert_from_records"
- ), pytest.raises(BulkDataException) as e:
+ with (
+ mock.patch("cumulusci.tasks.bulkdata.load.sql_bulk_insert_from_records"),
+ pytest.raises(BulkDataException) as e,
+ ):
task._process_job_results(mapping, step, local_ids)
assert "Error on record with id" in str(e.value)
@@ -1880,11 +1888,15 @@ def test_generate_results_id_map__exception_failure_with_rollback(self):
]
)
- with pytest.raises(BulkDataException) as e, mock.patch(
- "cumulusci.tasks.bulkdata.load.Rollback._perform_rollback"
- ) as mock_rollback, mock.patch(
- "cumulusci.tasks.bulkdata.load.sql_bulk_insert_from_records"
- ) as mock_insert_records:
+ with (
+ pytest.raises(BulkDataException) as e,
+ mock.patch(
+ "cumulusci.tasks.bulkdata.load.Rollback._perform_rollback"
+ ) as mock_rollback,
+ mock.patch(
+ "cumulusci.tasks.bulkdata.load.sql_bulk_insert_from_records"
+ ) as mock_insert_records,
+ ):
task._generate_results_id_map(
step, ["001000000000009", "001000000000010", "001000000000011"]
)
@@ -2880,14 +2892,16 @@ def _job_state_from_batches(self, job_id):
MEGABYTE = 2**20
# FIXME: more anlysis about the number below
- with mock.patch(
- "cumulusci.tasks.bulkdata.step.BulkJobMixin._job_state_from_batches",
- _job_state_from_batches,
- ), mock.patch(
- "cumulusci.tasks.bulkdata.step.BulkApiDmlOperation.get_results",
- get_results,
- ), assert_max_memory_usage(
- 15 * MEGABYTE
+ with (
+ mock.patch(
+ "cumulusci.tasks.bulkdata.step.BulkJobMixin._job_state_from_batches",
+ _job_state_from_batches,
+ ),
+ mock.patch(
+ "cumulusci.tasks.bulkdata.step.BulkApiDmlOperation.get_results",
+ get_results,
+ ),
+ assert_max_memory_usage(15 * MEGABYTE),
):
task()
@@ -3057,9 +3071,10 @@ def test_smart_lookup__mixed_sf_ids_and_local_refs(self):
},
)
- with mock.patch(
- "cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"
- ), mock.patch.object(task, "sf", create=True):
+ with (
+ mock.patch("cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"),
+ mock.patch.object(task, "sf", create=True),
+ ):
task._init_mapping()
with task._init_db():
@@ -3193,9 +3208,10 @@ def _validate_query_for_mapping_step(
}
},
)
- with mock.patch(
- "cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"
- ), mock.patch.object(task, "sf", create=True):
+ with (
+ mock.patch("cumulusci.tasks.bulkdata.load.validate_and_inject_mapping"),
+ mock.patch.object(task, "sf", create=True),
+ ):
task._init_mapping()
with task._init_db():
task._old_format = mock.Mock(return_value=old_format)
diff --git a/cumulusci/tasks/bulkdata/tests/test_select_utils.py b/cumulusci/tasks/bulkdata/tests/test_select_utils.py
index 3c9addd32d..efa9502902 100644
--- a/cumulusci/tasks/bulkdata/tests/test_select_utils.py
+++ b/cumulusci/tasks/bulkdata/tests/test_select_utils.py
@@ -397,9 +397,9 @@ def test_calculate_levenshtein_distance_basic():
expected_distance = (1 / 5 * 1.0 + 1 / 5 * 1.0) / 2 # Averaged over two fields
result = calculate_levenshtein_distance(record1, record2, weights)
- assert result == pytest.approx(
- expected_distance
- ), "Basic distance calculation failed."
+ assert result == pytest.approx(expected_distance), (
+ "Basic distance calculation failed."
+ )
# Empty fields
record1 = ["hello", ""]
@@ -411,9 +411,9 @@ def test_calculate_levenshtein_distance_basic():
expected_distance = (1 / 5 * 1.0 + 0 * 1.0) / 2 # Averaged over two fields
result = calculate_levenshtein_distance(record1, record2, weights)
- assert result == pytest.approx(
- expected_distance
- ), "Basic distance calculation with empty fields failed."
+ assert result == pytest.approx(expected_distance), (
+ "Basic distance calculation with empty fields failed."
+ )
# Partial empty fields
record1 = ["hello", "world"]
@@ -427,9 +427,9 @@ def test_calculate_levenshtein_distance_basic():
) / 2 # Averaged over two fields
result = calculate_levenshtein_distance(record1, record2, weights)
- assert result == pytest.approx(
- expected_distance
- ), "Basic distance calculation with partial empty fields failed."
+ assert result == pytest.approx(expected_distance), (
+ "Basic distance calculation with partial empty fields failed."
+ )
def test_calculate_levenshtein_distance_weighted():
@@ -443,9 +443,9 @@ def test_calculate_levenshtein_distance_weighted():
) / 2.5 # Weighted average over two fields
result = calculate_levenshtein_distance(record1, record2, weights)
- assert result == pytest.approx(
- expected_distance
- ), "Weighted distance calculation failed."
+ assert result == pytest.approx(expected_distance), (
+ "Weighted distance calculation failed."
+ )
def test_calculate_levenshtein_distance_records_length_doesnt_match():
@@ -600,18 +600,18 @@ def test_vectorize_records_mixed_numerical_boolean_categorical():
# Check the shape of the output vectors
assert final_db_vectors.shape[0] == len(db_records), "DB vectors row count mismatch"
- assert final_query_vectors.shape[0] == len(
- query_records
- ), "Query vectors row count mismatch"
+ assert final_query_vectors.shape[0] == len(query_records), (
+ "Query vectors row count mismatch"
+ )
# Expected dimensions: numerical (1) + categorical hashed features (4)
expected_feature_count = 2 + hash_features
- assert (
- final_db_vectors.shape[1] == expected_feature_count
- ), "DB vectors column count mismatch"
- assert (
- final_query_vectors.shape[1] == expected_feature_count
- ), "Query vectors column count mismatch"
+ assert final_db_vectors.shape[1] == expected_feature_count, (
+ "DB vectors column count mismatch"
+ )
+ assert final_query_vectors.shape[1] == expected_feature_count, (
+ "Query vectors column count mismatch"
+ )
def _build_large_annoy_fixture():
diff --git a/cumulusci/tasks/bulkdata/tests/test_snowfakery.py b/cumulusci/tasks/bulkdata/tests/test_snowfakery.py
index 22789aac9b..9f4b6d77d1 100644
--- a/cumulusci/tasks/bulkdata/tests/test_snowfakery.py
+++ b/cumulusci/tasks/bulkdata/tests/test_snowfakery.py
@@ -95,7 +95,6 @@ def __call__(self, *args, **kwargs):
in a normal mock_values structure."""
with self.lock: # the code below looks thread-safe but better safe than sorry
-
# tasks usually aren't called twice after being instantiated
# that would usually be a bug.
assert self not in self.mock_calls
@@ -153,11 +152,14 @@ def mock_load_data(
):
fake_load_data = FakeLoadData
- with mock.patch(
- "cumulusci.tasks.bulkdata.generate_and_load_data.LoadData", fake_load_data
- ), mock.patch(
- "cumulusci.tasks.bulkdata.snowfakery_utils.queue_manager.LoadData",
- fake_load_data,
+ with (
+ mock.patch(
+ "cumulusci.tasks.bulkdata.generate_and_load_data.LoadData", fake_load_data
+ ),
+ mock.patch(
+ "cumulusci.tasks.bulkdata.snowfakery_utils.queue_manager.LoadData",
+ fake_load_data,
+ ),
):
fake_load_data.reset()
@@ -187,12 +189,15 @@ def __call__(self, target, args, daemon):
process_manager = FakeProcessManager()
- with mock.patch(
- "cumulusci.utils.parallel.task_worker_queues.parallel_worker_queue.WorkerQueue.Thread",
- process_manager,
- ), mock.patch(
- "cumulusci.utils.parallel.task_worker_queues.parallel_worker_queue.WorkerQueue.Process",
- process_manager,
+ with (
+ mock.patch(
+ "cumulusci.utils.parallel.task_worker_queues.parallel_worker_queue.WorkerQueue.Thread",
+ process_manager,
+ ),
+ mock.patch(
+ "cumulusci.utils.parallel.task_worker_queues.parallel_worker_queue.WorkerQueue.Process",
+ process_manager,
+ ),
):
yield process_manager
@@ -394,9 +399,9 @@ def get_record_counts_from_snowfakery_results(
channeled_outboxes = tuple(results.working_dir.glob("*/data_load_outbox/*"))
regular_outboxes = tuple(results.working_dir.glob("data_load_outbox/*"))
- assert bool(regular_outboxes) ^ bool(
- channeled_outboxes
- ), f"One of regular_outboxes or channeled_outboxes should be available: {channeled_outboxes}, {regular_outboxes}"
+ assert bool(regular_outboxes) ^ bool(channeled_outboxes), (
+ f"One of regular_outboxes or channeled_outboxes should be available: {channeled_outboxes}, {regular_outboxes}"
+ )
outboxes = tuple(channeled_outboxes) + tuple(regular_outboxes)
for subdir in outboxes:
record_counts = SnowfakeryWorkingDirectory(subdir).get_record_counts()
@@ -559,9 +564,9 @@ def test_run_until_records_in_org__none_needed(
)
task()
assert len(mock_load_data.mock_calls) == 0, mock_load_data.mock_calls
- assert (
- len(threads_instead_of_processes.mock_calls) == 0
- ), threads_instead_of_processes.mock_calls
+ assert len(threads_instead_of_processes.mock_calls) == 0, (
+ threads_instead_of_processes.mock_calls
+ )
@pytest.mark.vcr()
@mock.patch("cumulusci.tasks.bulkdata.snowfakery.MIN_PORTION_SIZE", 5)
@@ -600,9 +605,9 @@ def test_run_until_records_in_org__multiple_needed(
task()
assert len(mock_load_data.mock_calls) == 2, mock_load_data.mock_calls
- assert (
- len(threads_instead_of_processes.mock_calls) == 1
- ), threads_instead_of_processes.mock_calls
+ assert len(threads_instead_of_processes.mock_calls) == 1, (
+ threads_instead_of_processes.mock_calls
+ )
def test_inaccessible_generator_yaml(self, snowfakery):
with pytest.raises(exc.TaskOptionsError, match="recipe"):
@@ -622,9 +627,12 @@ def test_snowfakery_debug_mode_and_cpu_count(self, snowfakery, mock_load_data):
@mock.patch("cumulusci.tasks.bulkdata.snowfakery.MIN_PORTION_SIZE", 3)
def test_record_count(self, snowfakery, mock_load_data):
task = snowfakery(recipe="datasets/recipe.yml", run_until_recipe_repeated="4")
- with mock.patch.object(task, "logger") as logger, mock.patch.object(
- task.project_config, "keychain", DummyKeychain()
- ) as keychain:
+ with (
+ mock.patch.object(task, "logger") as logger,
+ mock.patch.object(
+ task.project_config, "keychain", DummyKeychain()
+ ) as keychain,
+ ):
def get_org(username):
return DummyOrgConfig(
@@ -904,9 +912,12 @@ def test_channels_cli_options_conflict(self, create_task):
"recipe_options": {"xyzzy": "Nothing happens", "some_number": 37},
},
)
- with pytest.raises(exc.TaskOptionsError) as e, mock.patch.object(
- task.project_config, "keychain", DummyKeychain()
- ) as keychain:
+ with (
+ pytest.raises(exc.TaskOptionsError) as e,
+ mock.patch.object(
+ task.project_config, "keychain", DummyKeychain()
+ ) as keychain,
+ ):
def get_org(username):
return DummyOrgConfig(
@@ -933,9 +944,12 @@ def test_explicit_channel_declarations(self, mock_load_data, create_task):
/ "snowfakery/simple_snowfakery_channels.load.yml",
},
)
- with pytest.warns(UserWarning), mock.patch.object(
- task.project_config, "keychain", DummyKeychain()
- ) as keychain:
+ with (
+ pytest.warns(UserWarning),
+ mock.patch.object(
+ task.project_config, "keychain", DummyKeychain()
+ ) as keychain,
+ ):
def get_org(username):
return DummyOrgConfig(
@@ -1159,9 +1173,12 @@ def test_too_many_channel_declarations(self, mock_load_data, create_task):
/ "snowfakery/simple_snowfakery_channels_2.load.yml",
},
)
- with pytest.raises(exc.TaskOptionsError), mock.patch.object(
- task.project_config, "keychain", DummyKeychain()
- ) as keychain:
+ with (
+ pytest.raises(exc.TaskOptionsError),
+ mock.patch.object(
+ task.project_config, "keychain", DummyKeychain()
+ ) as keychain,
+ ):
def get_org(username):
return DummyOrgConfig(
diff --git a/cumulusci/tasks/bulkdata/tests/test_step.py b/cumulusci/tasks/bulkdata/tests/test_step.py
index 25a8362a54..4bbc4e6880 100644
--- a/cumulusci/tasks/bulkdata/tests/test_step.py
+++ b/cumulusci/tasks/bulkdata/tests/test_step.py
@@ -139,9 +139,9 @@ def test_parse_job_state(self):
" 200"
" "
""
- ) == DataOperationJobResult(
- DataOperationStatus.ROW_FAILURE, [], 0, 400
- ), "Multiple batches in single job"
+ ) == DataOperationJobResult(DataOperationStatus.ROW_FAILURE, [], 0, 400), (
+ "Multiple batches in single job"
+ )
assert mixin._parse_job_state(
''
@@ -150,9 +150,9 @@ def test_parse_job_state(self):
" 200"
" "
""
- ) == DataOperationJobResult(
- DataOperationStatus.ROW_FAILURE, [], 0, 200
- ), "Single batch"
+ ) == DataOperationJobResult(DataOperationStatus.ROW_FAILURE, [], 0, 200), (
+ "Single batch"
+ )
assert mixin._parse_job_state(
''
@@ -167,9 +167,9 @@ def test_parse_job_state(self):
" 10"
" "
""
- ) == DataOperationJobResult(
- DataOperationStatus.ROW_FAILURE, [], 20, 400
- ), "Multiple batches in single job"
+ ) == DataOperationJobResult(DataOperationStatus.ROW_FAILURE, [], 20, 400), (
+ "Multiple batches in single job"
+ )
assert mixin._parse_job_state(
''
@@ -177,9 +177,9 @@ def test_parse_job_state(self):
" 200"
" 10"
""
- ) == DataOperationJobResult(
- DataOperationStatus.ROW_FAILURE, [], 10, 200
- ), "Single batch"
+ ) == DataOperationJobResult(DataOperationStatus.ROW_FAILURE, [], 10, 200), (
+ "Single batch"
+ )
@mock.patch("time.sleep")
def test_wait_for_job(self, sleep_patch):
@@ -527,8 +527,12 @@ def test_get_prev_record_values(self):
step.bulk.get_all_results_for_query_batch.return_value = results
records = iter([["Test1"], ["Test2"], ["Test3"]])
- with mock.patch("json.load", side_effect=lambda result: result), mock.patch(
- "salesforce_bulk.util.IteratorBytesIO", side_effect=lambda result: result
+ with (
+ mock.patch("json.load", side_effect=lambda result: result),
+ mock.patch(
+ "salesforce_bulk.util.IteratorBytesIO",
+ side_effect=lambda result: result,
+ ),
):
prev_record_values, relevant_fields = step.get_prev_record_values(records)
diff --git a/cumulusci/tasks/bulkdata/tests/test_updates.py b/cumulusci/tasks/bulkdata/tests/test_updates.py
index da469b0dca..c5be4cd81f 100644
--- a/cumulusci/tasks/bulkdata/tests/test_updates.py
+++ b/cumulusci/tasks/bulkdata/tests/test_updates.py
@@ -40,12 +40,15 @@ def activate(self, func):
@wraps(func)
def wrapper(*args, **kwds):
self.mock_bulk_API_responses_context = MockBulkAPIResponsesContext()
- with mock.patch(
- "cumulusci.tasks.bulkdata.update_data.get_query_operation",
- self.mock_bulk_API_responses_context.get_query_operation,
- ), mock.patch(
- "cumulusci.tasks.bulkdata.update_data.get_dml_operation",
- self.mock_bulk_API_responses_context.get_dml_operation,
+ with (
+ mock.patch(
+ "cumulusci.tasks.bulkdata.update_data.get_query_operation",
+ self.mock_bulk_API_responses_context.get_query_operation,
+ ),
+ mock.patch(
+ "cumulusci.tasks.bulkdata.update_data.get_dml_operation",
+ self.mock_bulk_API_responses_context.get_dml_operation,
+ ),
):
try:
ret = func(*args, **kwds)
@@ -492,7 +495,6 @@ def test_update_row_errors_exception_catching(self, create_task):
class TestUpdatesIntegrationTests:
-
# VCR doesn't match because of randomized data
@pytest.mark.vcr()
def test_updates_task(self, create_task, ensure_accounts):
diff --git a/cumulusci/tasks/bulkdata/tests/test_upsert.py b/cumulusci/tasks/bulkdata/tests/test_upsert.py
index aa23c50fcb..e6da87212e 100644
--- a/cumulusci/tasks/bulkdata/tests/test_upsert.py
+++ b/cumulusci/tasks/bulkdata/tests/test_upsert.py
@@ -236,9 +236,9 @@ def test_upsert_rest__faked(
relevant_debug_statement = look_for_operation_creation_debug_statement(
task.logger.debug.mock_calls
)
- assert relevant_debug_statement == format(
- DataApi.REST
- ), relevant_debug_statement
+ assert relevant_debug_statement == format(DataApi.REST), (
+ relevant_debug_statement
+ )
def _mock_bulk(self, domain):
responses.add(
@@ -395,40 +395,43 @@ def test_upsert__fake_bulk(self, create_task, cumulusci_test_repo_root, org_conf
with mock.patch.object(task.logger, "debug"):
ret = task()
- assert ret == {
- "step_results": {
- "Insert Accounts": {
- "sobject": "Account",
- "record_type": None,
- "status": DataOperationStatus.SUCCESS,
- "job_errors": [],
- "records_processed": 0,
- "total_row_errors": 0,
- },
- "Upsert Contacts": {
- "sobject": "Contact",
- "record_type": None,
- "status": DataOperationStatus.SUCCESS,
- "job_errors": [],
- "records_processed": 0, # change here and above to 4 to match data
- "total_row_errors": 0,
- },
- "Insert Opportunities": {
- "sobject": "Opportunity",
- "record_type": None,
- "status": DataOperationStatus.SUCCESS,
- "job_errors": [],
- "records_processed": 0,
- "total_row_errors": 0,
- },
+ assert (
+ ret
+ == {
+ "step_results": {
+ "Insert Accounts": {
+ "sobject": "Account",
+ "record_type": None,
+ "status": DataOperationStatus.SUCCESS,
+ "job_errors": [],
+ "records_processed": 0,
+ "total_row_errors": 0,
+ },
+ "Upsert Contacts": {
+ "sobject": "Contact",
+ "record_type": None,
+ "status": DataOperationStatus.SUCCESS,
+ "job_errors": [],
+ "records_processed": 0, # change here and above to 4 to match data
+ "total_row_errors": 0,
+ },
+ "Insert Opportunities": {
+ "sobject": "Opportunity",
+ "record_type": None,
+ "status": DataOperationStatus.SUCCESS,
+ "job_errors": [],
+ "records_processed": 0,
+ "total_row_errors": 0,
+ },
+ }
}
- }, ret
+ ), ret
relevant_debug_statement = look_for_operation_creation_debug_statement(
task.logger.debug.mock_calls
)
- assert relevant_debug_statement in format(
- DataApi.BULK
- ), relevant_debug_statement
+ assert relevant_debug_statement in format(DataApi.BULK), (
+ relevant_debug_statement
+ )
def _test_two_upserts_and_check_results__complex(
self, api, create_task, cumulusci_test_repo_root, sf
diff --git a/cumulusci/tasks/command.py b/cumulusci/tasks/command.py
index 1935fc3e6f..ad227b28c3 100644
--- a/cumulusci/tasks/command.py
+++ b/cumulusci/tasks/command.py
@@ -1,4 +1,4 @@
-""" Tasks for running a command in a subprocess
+"""Tasks for running a command in a subprocess
Command - run a command with optional environment variables
SalesforceCommand - run a command with credentials passed
diff --git a/cumulusci/tasks/create_package_version.py b/cumulusci/tasks/create_package_version.py
index 9933b3e49e..ff15b2a905 100644
--- a/cumulusci/tasks/create_package_version.py
+++ b/cumulusci/tasks/create_package_version.py
@@ -271,7 +271,7 @@ def _run_task(self):
self.options.get("install_key"),
)
res = self.tooling.query(
- "SELECT Dependencies FROM SubscriberPackageVersion " f"WHERE {where_clause}"
+ f"SELECT Dependencies FROM SubscriberPackageVersion WHERE {where_clause}"
)
self.return_values["dependencies"] = self._prepare_cci_dependencies(
res["records"][0]["Dependencies"]
@@ -323,7 +323,7 @@ def _get_or_create_package(self, package_config: PackageConfig):
if existing_package["ContainerOptions"] != package_config.package_type:
raise PackageUploadFailure(
f"Duplicate Package: {existing_package['ContainerOptions']} package with id "
- f"{ existing_package['Id']} has the same name ({package_config.package_name}) "
+ f"{existing_package['Id']} has the same name ({package_config.package_name}) "
"for this namespace but has a different package type"
)
package_id = existing_package["Id"]
@@ -391,9 +391,9 @@ def _create_version_request(
}
if package_config.post_install_script:
- package_descriptor[
- "postInstallScript"
- ] = package_config.post_install_script
+ package_descriptor["postInstallScript"] = (
+ package_config.post_install_script
+ )
if package_config.uninstall_script:
package_descriptor["uninstallScript"] = package_config.uninstall_script
diff --git a/cumulusci/tasks/datadictionary.py b/cumulusci/tasks/datadictionary.py
index 04418512ce..9eec3a61e1 100644
--- a/cumulusci/tasks/datadictionary.py
+++ b/cumulusci/tasks/datadictionary.py
@@ -133,14 +133,14 @@ def _init_options(self, kwargs):
super()._init_options(kwargs)
if self.options.get("object_path") is None:
- self.options[
- "object_path"
- ] = f"{self.project_config.project__name} Objects.csv"
+ self.options["object_path"] = (
+ f"{self.project_config.project__name} Objects.csv"
+ )
if self.options.get("field_path") is None:
- self.options[
- "field_path"
- ] = f"{self.project_config.project__name} Fields.csv"
+ self.options["field_path"] = (
+ f"{self.project_config.project__name} Fields.csv"
+ )
include_dependencies = self.options.get("include_dependencies")
self.options["include_dependencies"] = process_bool_arg(
@@ -679,13 +679,13 @@ def _write_field_results(self, file_handle):
# Locate the last versions where the valid values and the help text changed.
valid_values_version = None
- for (index, version) in enumerate(versions[1:]):
+ for index, version in enumerate(versions[1:]):
if version.valid_values != last_version.valid_values:
valid_values_version = versions[index]
break
help_text_version = None
- for (index, version) in enumerate(versions[1:]):
+ for index, version in enumerate(versions[1:]):
if version.help_text != last_version.help_text:
help_text_version = versions[index]
break
diff --git a/cumulusci/tasks/github/merge.py b/cumulusci/tasks/github/merge.py
index 1cc32887fa..569bbe578f 100644
--- a/cumulusci/tasks/github/merge.py
+++ b/cumulusci/tasks/github/merge.py
@@ -45,13 +45,13 @@ def _init_options(self, kwargs):
if "commit" not in self.options:
self.options["commit"] = self.project_config.repo_commit
if "branch_prefix" not in self.options:
- self.options[
- "branch_prefix"
- ] = self.project_config.project__git__prefix_feature
+ self.options["branch_prefix"] = (
+ self.project_config.project__git__prefix_feature
+ )
if "source_branch" not in self.options:
- self.options[
- "source_branch"
- ] = self.project_config.project__git__default_branch
+ self.options["source_branch"] = (
+ self.project_config.project__git__default_branch
+ )
if "skip_future_releases" not in self.options:
self.options["skip_future_releases"] = True
else:
diff --git a/cumulusci/tasks/github/release.py b/cumulusci/tasks/github/release.py
index bda1703145..48ff28b2ea 100644
--- a/cumulusci/tasks/github/release.py
+++ b/cumulusci/tasks/github/release.py
@@ -13,7 +13,6 @@
class CreateRelease(BaseGithubTask):
-
task_options = {
"version": {
"description": "The managed package version number. Ex: 1.2",
diff --git a/cumulusci/tasks/github/tests/test_merge.py b/cumulusci/tasks/github/tests/test_merge.py
index 80743f7cb3..70c249e433 100644
--- a/cumulusci/tasks/github/tests/test_merge.py
+++ b/cumulusci/tasks/github/tests/test_merge.py
@@ -1019,7 +1019,7 @@ def test_is_release_branch(self):
]
invalid_release_branches = [
f"{prefix}200_",
- f"{prefix}_200" f"{prefix}230_",
+ f"{prefix}_200{prefix}230_",
f"{prefix}230__child",
f"{prefix}230__grand__child",
f"{prefix}230a",
diff --git a/cumulusci/tasks/github/util.py b/cumulusci/tasks/github/util.py
index c838a20bd8..35830897b1 100644
--- a/cumulusci/tasks/github/util.py
+++ b/cumulusci/tasks/github/util.py
@@ -89,7 +89,7 @@ def _create_new_tree_item(self, item: dict) -> Optional[dict]:
"""
if not item["path"].startswith(self.repo_dir):
# outside target dir in repo - keep in tree
- self.logger.debug(f'Unchanged (outside target path): {item["path"]}')
+ self.logger.debug(f"Unchanged (outside target path): {item['path']}")
return None
local_path, content = self._find_and_read_item(item)
@@ -105,7 +105,7 @@ def _create_new_tree_item(self, item: dict) -> Optional[dict]:
new_item.pop("sha", None)
new_item.update(self._get_content_or_sha(content, self.dry_run))
else:
- self.logger.debug(f'Unchanged: {item["path"]}')
+ self.logger.debug(f"Unchanged: {item['path']}")
return None
return new_item
@@ -124,7 +124,7 @@ def _add_new_files_to_tree(self, new_tree_list):
if local_file_subpath not in new_tree_target_subpaths:
repo_path = f"{self.repo_dir}/" if self.repo_dir else ""
new_item = {
- "path": f'{repo_path}{local_file_subpath.replace(os.sep, "/")}',
+ "path": f"{repo_path}{local_file_subpath.replace(os.sep, '/')}",
"mode": "100644", # FIXME: This is wrong
"type": "blob",
}
diff --git a/cumulusci/tasks/marketing_cloud/deploy.py b/cumulusci/tasks/marketing_cloud/deploy.py
index 840fe942b4..63ef4e7bd8 100644
--- a/cumulusci/tasks/marketing_cloud/deploy.py
+++ b/cumulusci/tasks/marketing_cloud/deploy.py
@@ -301,9 +301,9 @@ def _poll_update_interval(self):
def _process_completed_deploy(self, response_data: Dict):
deploy_status = response_data["status"]
- assert (
- deploy_status != IN_PROGRESS_STATUSES
- ), "Deploy should be in a completed state before processing."
+ assert deploy_status != IN_PROGRESS_STATUSES, (
+ "Deploy should be in a completed state before processing."
+ )
self.poll_complete = True
if deploy_status in FINISHED_STATUSES:
diff --git a/cumulusci/tasks/metadata/package.py b/cumulusci/tasks/metadata/package.py
index d7689d8eb0..fce285806f 100644
--- a/cumulusci/tasks/metadata/package.py
+++ b/cumulusci/tasks/metadata/package.py
@@ -46,7 +46,6 @@ def process_common_components(response_messages: List, components: Dict):
if not response_messages or not components:
return components
for message in response_messages:
-
message_list = message.firstChild.nextSibling.firstChild.nodeValue.split("'")
if len(message_list) > 1:
component_type = message_list[1]
@@ -353,7 +352,6 @@ class ParserConfigurationError(Exception):
class MetadataXmlElementParser(BaseMetadataParser):
-
namespaces = {"sf": "http://soap.sforce.com/2006/04/metadata"}
def __init__(
diff --git a/cumulusci/tasks/metadata/tests/test_package.py b/cumulusci/tasks/metadata/tests/test_package.py
index 4ba10fba43..c489be8bff 100644
--- a/cumulusci/tasks/metadata/tests/test_package.py
+++ b/cumulusci/tasks/metadata/tests/test_package.py
@@ -262,9 +262,7 @@ def test_parser(self):
assert """ Test.TestTestMDT
- """ == "\n".join(
- result
- )
+ """ == "\n".join(result)
def test_parser__missing_item_xpath(self):
with pytest.raises(ParserConfigurationError):
diff --git a/cumulusci/tasks/metadata_etl/sharing.py b/cumulusci/tasks/metadata_etl/sharing.py
index 1989c19bb3..2ce134ae6d 100644
--- a/cumulusci/tasks/metadata_etl/sharing.py
+++ b/cumulusci/tasks/metadata_etl/sharing.py
@@ -102,7 +102,7 @@ def _poll_action(self):
elapsed = datetime.now() - self.time_start
if elapsed.total_seconds() > self.options["timeout"]:
raise CumulusCIException(
- f'Sharing enablement not completed after {self.options["timeout"]} seconds'
+ f"Sharing enablement not completed after {self.options['timeout']} seconds"
)
for sobject in self.owds:
diff --git a/cumulusci/tasks/push/README.md b/cumulusci/tasks/push/README.md
index a7d7bb8bbd..2d858d6cdd 100644
--- a/cumulusci/tasks/push/README.md
+++ b/cumulusci/tasks/push/README.md
@@ -1,21 +1,20 @@
# Push Upgrade API Scripts
-These scripts are designed to work with the Salesforce Push Upgrade API (in Pilot in Winter 16) which exposes new objects via the Tooling API that allow interacting with push upgrades in a packaging org. The main purpose of these scripts is to use the Push Upgrade API to automate push upgrades through Jenkins.
+These scripts are designed to work with the Salesforce Push Upgrade API (in Pilot in Winter '16), which exposes new objects via the Tooling API that allow interacting with push upgrades in a packaging org. The main purpose of these scripts is to use the Push Upgrade API to automate push upgrades through Jenkins.
# push_api.py - Python Wrapper for Push Upgrade API
-This python file provides wrapper classes around the Tooling API objects and abstracts interaction with them and their related data to make writing scripts easier. All the other scripts in this directory use the SalesforcePushApi wrapper to interact with the Tooling API.
+This Python module provides wrapper classes around the Tooling API objects and abstracts interaction with them and their related data to make writing scripts easier. All the other scripts in this directory use the SalesforcePushApi wrapper to interact with the Tooling API.
Initializing the SalesforcePushApi wrapper can be done with the following python code:
push_api = SalesforcePushApi(sf_user, sf_pass, sf_serverurl)
You can also pass two optional keyword args to the initialization to control the wrapper's behavior
-
-* **lazy**: A list of objects that should be lazily looked up. Currently, the only implementations for this are 'jobs' and 'subscribers'. If either are included in the list, they will be looked up on demand when needed by a referenced object. For example, if you are querying all jobs and subscribers is not set to lazy, all subscribers will first be retrieved. If lazy is enabled, subscriber orgs will only be retrieved when trying to resolve references for a particular job. Generally, if you have a lot of subscribers and only expect your script to need to lookup a small number of them, enabling lazy for subscribers will reduce api calls and cause the script to run faster.
-* **default_where**: A dictionary with Push Upgrade API objects as key and a value containing a SOQL WHERE clause statement which is applied to all queries against the object to effectively set the universe for a given object. For example:
-
+- **lazy**: A list of objects that should be lazily looked up. Currently, the only implementations for this are 'jobs' and 'subscribers'. If either is included in the list, they will be looked up on demand when needed by a referenced object. For example, if you are querying all jobs and subscribers is not set to lazy, all subscribers will first be retrieved. If lazy is enabled, subscriber orgs will only be retrieved when trying to resolve references for a particular job. Generally, if you have a lot of subscribers and only expect your script to need to look up a small number of them, enabling lazy for subscribers will reduce API calls and cause the script to run faster.
+
+- **default_where**: A dictionary mapping Push Upgrade API object names to SOQL WHERE clauses that are applied to all queries against the object, effectively setting the universe for that object. For example:
default_where = {'PackageSubscriber': "OrgType = 'Sandbox'"}
In the example above, the wrapper would never return a PackageSubscriber which is not a Sandbox org.
@@ -24,22 +23,22 @@ In the example above, the wrapper would never return a PackageSubscriber which i
## Common Environment Variables
-The push scripts are all designed to receive their arguments via environment variables. The following are common amongst all of the Push Scripts
+The push scripts are all designed to receive their arguments via environment variables. The following are common to all of the push scripts:
-* **SF_USERNAME**: The Salesforce username for the packaging org
-* **SF_PASSWORD**: The Salesforce password and security token for the packaging org
-* **SF_SERVERURL**: The login url for the Salesforce packaging org.
+- **SF_USERNAME**: The Salesforce username for the packaging org
+- **SF_PASSWORD**: The Salesforce password and security token for the packaging org
+- **SF_SERVERURL**: The login URL for the Salesforce packaging org.
## get_version_id.py
-Takes a namespace and version string and looks up the given version. Returns the version's Salesforce Id.
+Takes a namespace and version string and looks up the given version. Returns the version's Salesforce Id.
The script handles parsing the version number string into a SOQL query against the MetadataPackageVersion object with the correct MajorVersion, MinorVersion, PatchVersion, ReleaseState, and BuildNumber (i.e. Beta number).
### Required Environment Variables
-* **NAMESPACE**: The Package's namespace prefix
-* **VERSION_NUMBER**: The version number string.
+- **NAMESPACE**: The Package's namespace prefix
+- **VERSION_NUMBER**: The version number string.
## orgs_for_push.py
@@ -47,13 +46,12 @@ Takes a MetadataPackageVersion Id and optionally a where clause to filter Subscr
### Required Environment Variables
-* **VERSION**: The MetadataPackageVersion Id of the version you want to push upgrade. This is used to look for all users not on the version or a newer version
+- **VERSION**: The MetadataPackageVersion Id of the version you want to push upgrade. This is used to look for all users not on the version or a newer version
### Optional Environment Variables
-* **SUBSCRIBER_WHERE**: An extra filter to be applied to all Subscriber queries. For example, setting this to OrgType = 'Sandbox' would find all Sandbox orgs eligible for push upgrade to the specified version
+- **SUBSCRIBER_WHERE**: An extra filter to be applied to all Subscriber queries. For example, setting this to OrgType = 'Sandbox' would find all Sandbox orgs eligible for push upgrade to the specified version
## failed_orgs_for_push.py
-Takes a PackagePushRequest Id and optionally a where clause to filter Subscribers and returns a list of OrgId's one per line for all orgs which failed the
-
+Takes a PackagePushRequest Id and optionally a where clause to filter Subscribers and returns a list of OrgId's one per line for all orgs which failed the
diff --git a/cumulusci/tasks/push/pushfails.py b/cumulusci/tasks/push/pushfails.py
index 0c5ce23582..74c257a712 100644
--- a/cumulusci/tasks/push/pushfails.py
+++ b/cumulusci/tasks/push/pushfails.py
@@ -1,4 +1,4 @@
-""" simple task(s) for reporting on push upgrade jobs.
+"""simple task(s) for reporting on push upgrade jobs.
this doesn't use the nearby push_api module, and was just a quick ccistyle
get the job done kinda moment.
diff --git a/cumulusci/tasks/push/tasks.py b/cumulusci/tasks/push/tasks.py
index ba0255f7c8..39259b7aa1 100644
--- a/cumulusci/tasks/push/tasks.py
+++ b/cumulusci/tasks/push/tasks.py
@@ -202,7 +202,6 @@ def _report_push_status(self, request_id):
class SchedulePushOrgList(BaseSalesforcePushTask):
-
task_options = {
"csv": {"description": "The path to a CSV file to read.", "required": False},
"csv_field_name": {
@@ -261,9 +260,9 @@ def _init_options(self, kwargs):
if "namespace" not in self.options:
self.options["namespace"] = self.project_config.project__package__namespace
if "metadata_package_id" not in self.options:
- self.options[
- "metadata_package_id"
- ] = self.project_config.project__package__metadata_package_id
+ self.options["metadata_package_id"] = (
+ self.project_config.project__package__metadata_package_id
+ )
if "batch_size" not in self.options:
self.options["batch_size"] = 200
if "csv" not in self.options and "csv_field_name" in self.options:
@@ -448,10 +447,10 @@ def _get_orgs(self):
for included_version in included_versions:
# Clear the get_subscribers method cache before each call
push_api.get_subscribers.cache_clear()
- push_api.default_where[
- "PackageSubscriber"
- ] = "{} AND MetadataPackageVersionId = '{}'".format(
- default_where["PackageSubscriber"], included_version
+ push_api.default_where["PackageSubscriber"] = (
+ "{} AND MetadataPackageVersionId = '{}'".format(
+ default_where["PackageSubscriber"], included_version
+ )
)
for subscriber in push_api.get_subscribers():
orgs.append(subscriber["OrgKey"])
@@ -464,10 +463,10 @@ def _get_orgs(self):
excluded_versions = [str(version.sf_id)]
for newer in newer_versions:
excluded_versions.append(str(newer.sf_id))
- push_api.default_where[
- "PackageSubscriber"
- ] += " AND MetadataPackageVersionId NOT IN {}".format(
- "('" + "','".join(excluded_versions) + "')"
+ push_api.default_where["PackageSubscriber"] += (
+ " AND MetadataPackageVersionId NOT IN {}".format(
+ "('" + "','".join(excluded_versions) + "')"
+ )
)
for subscriber in push_api.get_subscribers():
diff --git a/cumulusci/tasks/release_notes/README.md b/cumulusci/tasks/release_notes/README.md
index 9951b15460..4d2e5f50ec 100644
--- a/cumulusci/tasks/release_notes/README.md
+++ b/cumulusci/tasks/release_notes/README.md
@@ -13,25 +13,25 @@ Start the section with `# Critical Changes` followed by your content
For example:
This won't be included
-
+
# Critical Changes
-
+
This will be included in Critical Changes
-
+
## Changes
-The Changes section is where you should list off any changes worth highlight to users in the release notes. This section should always include instructions for users for any post-upgrade tasks they need to perform to enable new functionality. For example, users should be told to grant permissions and add new CustomFields to layouts.
+The Changes section is where you should list any changes worth highlighting to users in the release notes. This section should always include instructions for users for any post-upgrade tasks they need to perform to enable new functionality. For example, users should be told to grant permissions and add new CustomFields to layouts.
Start the section with `# Changes` followed by your content
For example:
This won't be included
-
+
# Changes
-
+
This will be included in Changes
-
+
## Issues Closed
The Issues Closed section is where you should link to any closed issues that should be listed in the release notes.
@@ -41,9 +41,9 @@ Start the section with `# Changes` followed by your content
For example:
This won't be included
-
+
# Issues Closed
-
+
Fixes #102
resolves #100
This release closes #101
@@ -55,9 +55,9 @@ Would output:
#100: Title of Issue 100
#101: Title of Issue 101
#102: Title of Issue 102
-
+
A few notes about how issues are parsed:
-* The parser uses the same format as Github: https://help.github.com/articles/closing-issues-via-commit-messages/
-* The parser searches for all issue numbers and sorts them by their integer value, looks up their title, and outputs a formatted line with the issue number and title for each issue.
-* The parser ignores everything else in the line that is not an issue number. Anything that is not an issue number will not appear in the rendered release notes
+- The parser uses the same format as GitHub: https://help.github.com/articles/closing-issues-via-commit-messages/
+- The parser searches for all issue numbers and sorts them by their integer value, looks up their title, and outputs a formatted line with the issue number and title for each issue.
+- The parser ignores everything else in the line that is not an issue number. Anything that is not an issue number will not appear in the rendered release notes.
diff --git a/cumulusci/tasks/release_notes/generator.py b/cumulusci/tasks/release_notes/generator.py
index 615665df69..79f4f32b6d 100644
--- a/cumulusci/tasks/release_notes/generator.py
+++ b/cumulusci/tasks/release_notes/generator.py
@@ -254,7 +254,6 @@ def _update_release_content(self, release, content):
# update existing sections
for line in release.body.splitlines():
-
if current_parser:
if current_parser._is_end_line(current_parser._process_line(line)):
parser_content = current_parser.render(
diff --git a/cumulusci/tasks/release_notes/parser.py b/cumulusci/tasks/release_notes/parser.py
index 1fef5d4e16..052a302bdc 100644
--- a/cumulusci/tasks/release_notes/parser.py
+++ b/cumulusci/tasks/release_notes/parser.py
@@ -60,7 +60,6 @@ def parse(self, change_note):
# Add all content once in the section
if self._in_section:
-
# End when the end of section is found
if self._is_end_line(line):
self._in_section = False
diff --git a/cumulusci/tasks/release_notes/task.py b/cumulusci/tasks/release_notes/task.py
index 8d56b20776..54eb80221f 100644
--- a/cumulusci/tasks/release_notes/task.py
+++ b/cumulusci/tasks/release_notes/task.py
@@ -14,7 +14,6 @@
class AllGithubReleaseNotes(BaseGithubTask):
-
task_options = {
"repos": {
"description": (
@@ -36,10 +35,10 @@ def _run_task(self):
.body
)
table_of_contents += (
- f"""