
adaptavist's People

Contributors

2fake, bong1991, dependabot[bot], kanob, manefix, shrikantsingh0585, shutgun, ststeinberg


adaptavist's Issues

Adaptavist.get_users receives only 1000 users because of a Jira REST API limitation

The Adaptavist.get_users method returns at most 1000 users because the underlying Jira REST API caps the result set. From the Jira docs:

This operation first applies a filter to match the search string and property, and then takes the filtered users in the range defined by startAt and maxResults, up to the thousandth user. To get all the users who match the search string and property, use Get all users and filter the records in your code.
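Until the library works around the cap itself, all users can be collected client-side by paging with startAt/maxResults until a short page comes back. A minimal sketch, where `fetch_page` is a hypothetical callable wrapping the actual Jira request (it is not part of this library's API):

```python
from typing import Any, Callable, Dict, List

def get_all_users(fetch_page: Callable[[int, int], List[Dict[str, Any]]],
                  page_size: int = 200) -> List[Dict[str, Any]]:
    """Collect every user by paging with startAt/maxResults.

    fetch_page(start_at, max_results) is assumed to wrap a call such as
    GET /rest/api/2/user/search?startAt=...&maxResults=... (hypothetical).
    """
    users: List[Dict[str, Any]] = []
    start_at = 0
    while True:
        page = fetch_page(start_at, page_size)
        users.extend(page)
        if len(page) < page_size:  # a short page means there are no more users
            return users
        start_at += page_size
```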

Add possibility to skip SSL verification

Please add the possibility to initialize the Adaptavist client with verify=False for its requests calls. The authentication setup of our Jira API changed, and the server now fails with SSL certificate errors. While the requests module accepts a verify argument for its HTTP methods, there is no way to pass verify=False through the Adaptavist client's method calls. The suggestion is to add an ssl_verification field to the init method (with default value True).
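A minimal sketch of the requested change, assuming the client stores the flag and threads it through to every requests call (the parameter name `ssl_verification` is the reporter's suggestion, not an existing API):

```python
class Adaptavist:
    """Sketch only: thread an ssl_verification flag through to requests."""

    def __init__(self, jira_server: str, jira_username: str,
                 jira_password: str, ssl_verification: bool = True) -> None:
        self._adaptavist_api_url = jira_server + "/rest/atm/1.0"
        self._authentication = (jira_username, jira_password)
        self._ssl_verification = ssl_verification

    def _get(self, url: str):
        import requests  # imported lazily; every request reuses the stored flag
        return requests.get(url, auth=self._authentication,
                            verify=self._ssl_verification)
```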

Potential dependency conflicts between adaptavist and requests

Hi, as shown in the following full dependency graph of adaptavist, adaptavist requires requests with no version constraint, while the installed requests-toolbelt (0.9.1) requires requests>=2.0.1,<3.0.0.

According to pip's "first found wins" installation strategy, requests 2.23.0 is the version actually installed.

Although requests 2.23.0 happens to satisfy the stricter constraint (requests>=2.0.1,<3.0.0), the unconstrained requirement will lead to a build failure once a requests release outside that range is published.

Dependency tree:

```
adaptavist - 1.1.0
| +- requests (installed version: 2.23.0, version range: *)
| | +- certifi (installed version: 2020.4.5.1, version range: >=2017.4.17)
| | +- chardet (installed version: 3.0.4, version range: >=3.0.2,<4)
| | +- idna (installed version: 2.9, version range: >=2.5,<3)
| | +- urllib3 (installed version: 1.25.9, version range: >=1.21.1,<1.26)
| +- requests-toolbelt (installed version: 0.9.1, version range: *)
| | +- requests (installed version: 2.23.0, version range: >=2.0.1,<3.0.0)
| | | +- certifi (installed version: 2020.4.5.1, version range: >=2017.4.17)
| | | +- chardet (installed version: 3.0.4, version range: >=3.0.2,<4)
| | | +- idna (installed version: 2.9, version range: >=2.5,<3)
| | | +- urllib3 (installed version: 1.25.9, version range: >=1.21.1,<1.26)
```
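One conventional fix is for adaptavist to declare the same upper bound that requests-toolbelt already implies. A sketch of such a constraint (an illustrative fragment; the project's actual packaging metadata may be laid out differently):

```python
# Fragment of a hypothetical setup.py: constrain requests to the range
# that requests-toolbelt 0.9.1 already requires, so resolution can never
# pick a requests release outside >=2.0.1,<3.0.0.
install_requires = [
    "requests>=2.0.1,<3.0.0",
    "requests-toolbelt>=0.9.1",
]
```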

Thanks for your attention.
Best,
Neolith

add test_result_id to edit_test_result_status

I use the Jenkins JUnit plugin for TM4J. The report structure has neither environment nor comments, so I have to edit the test result status with the status from the original JUnit report.
The problem arises when the same test is run multiple times (data driven): if I edit the test result status only by test_run_key and test_case_key, it keeps editing the same test result and leaves out the rest.
This method needs a test_result_id parameter so that the correct test result can be addressed.
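The requested extension could look like the following sketch, which chooses the endpoint based on whether an explicit result id is given. Both URL paths are illustrative, not confirmed endpoints of the TM4J REST API:

```python
from typing import Optional

def build_result_url(api_url: str, test_run_key: str, test_case_key: str,
                     test_result_id: Optional[int] = None) -> str:
    """Choose the endpoint for editing a test result (sketch only).

    Without an id, the server resolves the result from run + case keys, so
    repeated executions of the same case all hit the same result. With an
    explicit id, each execution can be addressed individually.
    """
    if test_result_id is not None:
        # hypothetical id-based path targeting one specific execution
        return f"{api_url}/testresult/{test_result_id}"
    return f"{api_url}/testrun/{test_run_key}/testcase/{test_case_key}/testresult"
```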

adaptavist.get_test_results() can't sort by 'index'

Hello,

For some reason the Adaptavist plugin on our Jira Server instance does not return the index key from the get_test_results endpoint
"{self._adaptavist_api_url}/testrun/{test_run_key}/testresults"

So when a newer version of this library (> 2.0.0) attempts to sort, it fails because the index key does not exist:
result["scriptResults"] = sorted(result["scriptResults"], key=lambda result: result["index"])

```python
def get_test_results(self, test_run_key: str) -> List[Dict[str, Any]]:
    """
    Get all test results for a given test run.

    :param test_run_key: Test run key of the result to be updated. ex. "JQA-R1234"
    :returns: Test results
    """
    request_url = f"{self._adaptavist_api_url}/testrun/{test_run_key}/testresults"
    self._logger.debug("Getting all test results for run %s", test_run_key)
    request = self._get(request_url)
    if not request:
        return []
    results = request.json()

    for result in results:
        result["scriptResults"] = sorted(result["scriptResults"], key=lambda result: result["index"])
    return results
```
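Until the server-side behaviour is understood, a defensive client-side workaround is to fall back to a default when the index key is missing; since Python's sort is stable, results without an index keep their server order. A sketch (not the library's actual fix):

```python
from typing import Any, Dict, List

def sort_script_results(results: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Sort each result's scriptResults in place, tolerating a missing 'index'.

    Entries without an 'index' sort as 0; Python's stable sort keeps their
    original relative order.
    """
    for result in results:
        result["scriptResults"] = sorted(
            result.get("scriptResults", []),
            key=lambda script_result: script_result.get("index", 0))
    return results
```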

"create_test_run" method: cycle is created and is assigned to test plan, BUT test cases are not assigned to created test cycle

This issue is connected to the key/id problem when creating a cycle: #5

Regarding the create_test_run method specifically:

Request data:
{'projectId': 11101, 'testPlanId': 165, 'name': 'Cycle: SMADM-08/11/2020 12:40:10', 'folderId': None, 'issueKey': None, 'items': [{'testCaseKey': 'SMADM-T36', 'environment': None, 'executedBy': 'dima', 'assignedTo': 'dima'}], 'plannedEndDate': '2020-11-08T12:40:10.915Z', 'plannedStartDate': '2020-11-08T12:40:10.915Z'}

After that, the test plan is created from another method, and the test cycle is created and linked to the test plan, BUT the test case 'SMADM-T36' is not assigned to the test cycle.

Response from this method: {'testRunItems': [], 'id': 1620, 'key': 'SMADM-C64'}

Unable to add attachments to test execution when test run created by using clone_test_run API

I used the clone_test_run API to clone a reference test run (test cycle), then used the edit_test_result_status API to update the test results. When I then tried to add attachments to an executed test with the add_test_result_attachment API, I got a KeyError.
However, if the test run (test cycle) is created manually, I am able to add attachments to the test execution using the same add_test_result_attachment API.

Creation of Test Case

Can someone help me with the creation of a test case? I don't see such a method/function. Is this yet to be implemented, or am I not looking at the correct documentation? I can see other methods/functions such as creating test runs, test results, etc., but not test cases.

Most requests via *_key parameter don't work; id has to be used instead

Most requests that take a *_key parameter don't work; the corresponding id has to be used instead.

for example:

```python
def create_test_run(self, project_key, test_run_name, **kwargs):
    """
    Create a new test run.

    :param project_key: project key of the test run ex. "TEST"
    :type project_key: str
    :param test_run_name: name of the test run to be created
    :type test_run_name: str

    :param kwargs: Arbitrary list of keyword arguments
            folder: name of the folder where to create the new test run
            issue_key: issue key to link this test run to
            test_plan_key: test plan key to link this test run to
            test_cases: list of test case keys to be linked to the test run ex. ["TEST-T1026","TEST-T1027"]
            environment: environment to distinguish multiple executions (call get_environments() to get a list of available ones)

    :return: key of the test run created
    :rtype: str
    """
    self.logger.debug("create_test_run(\"%s\", \"%s\")", project_key, test_run_name)

    folder = kwargs.pop("folder", None)
    issue_key = kwargs.pop("issue_key", None)
    test_plan_key = kwargs.pop("test_plan_key", None)
    test_cases = kwargs.pop("test_cases", [])
    environment = kwargs.pop("environment", None)

    assert not kwargs, "Unknown arguments: %r" % kwargs

    folder = ("/" + folder).replace("//", "/") if folder else None
    if folder and folder not in self.get_folders(project_key=project_key, folder_type="TEST_RUN"):
        self.create_folder(project_key=project_key, folder_type="TEST_RUN", folder_name=folder)

    test_cases_list_of_dicts = []
    for test_case_key in test_cases:
        test_cases_list_of_dicts.append({"testCaseKey": test_case_key, "environment": environment, "executedBy": get_executor(), "assignedTo": get_executor()})

    request_url = self.adaptavist_api_url + "/testrun"

    request_data = {"projectKey": project_key,
                    "testPlanKey": test_plan_key,
                    "name": test_run_name,
                    "folder": folder,
                    "issueKey": issue_key,
                    "items": test_cases_list_of_dicts}

    try:
        request = requests.post(request_url,
                                auth=self.authentication,
                                headers=self.headers,
                                data=json.dumps(request_data))
        request.raise_for_status()
    except HTTPError as ex:
        # HttpPost: in case of status 400 request.text contains error messages
        self.logger.error("request failed. %s %s", ex, request.text)
        return None
    except (requests.exceptions.ConnectionError, requests.exceptions.RequestException) as ex:
        self.logger.error("request failed. %s", ex)
        return None

    response = request.json()

    return response["key"]
```

With this request_data the request doesn't work; it has to be sent with the following data instead:

```python
def create_test_run(self, project_key, project_id, test_run_name, **kwargs):
    """
    Create a new test run.

    :param project_key: project key of the test run ex. "TEST"
    :type project_key: str
    :param project_id: project id of the test run ex. 11101
    :type project_id: str
    :param test_run_name: name of the test run to be created
    :type test_run_name: str

    :param kwargs: Arbitrary list of keyword arguments
            folder: name of the folder where to create the new test run
            issue_key: issue key to link this test run to
            test_plan_key: test plan key to link this test run to
            test_plan_id: test plan id to link this test run to
            test_cases: list of test case keys to be linked to the test run ex. ["TEST-T1026","TEST-T1027"]
            environment: environment to distinguish multiple executions (call get_environments() to get a list of available ones)
            time: planned start and end date of the test run

    :return: key and id of the test run created
    :rtype: tuple
    """
    self.logger.debug("create_test_run(\"%s\", \"%s\", \"%s\")", project_key, project_id, test_run_name)

    folder = kwargs.pop("folder", None)
    issue_key = kwargs.pop("issue_key", None)
    test_plan_key = kwargs.pop("test_plan_key", None)
    test_plan_id = kwargs.pop("test_plan_id", None)
    test_cases = kwargs.pop("test_cases", [])
    environment = kwargs.pop("environment", None)
    time = kwargs.pop("time", None)

    assert not kwargs, "Unknown arguments: %r" % kwargs

    folder_name = ("/" + folder).replace("//", "/") if folder else None
    folder_id = self.create_folder_plan(project_id=project_id, project_key=project_key, folder_type="TEST_PLAN",
                                        folder_name=folder_name)

    test_cases_list_of_dicts = []
    for test_case_key in test_cases:
        test_cases_list_of_dicts.append({"testCaseKey": test_case_key, "environment": environment, "executedBy": get_executor(), "assignedTo": get_executor()})

    request_url = self.adaptavist_api_url + "/testrun"

    request_data = {"projectId": project_id,
                    "testPlanId": test_plan_id,
                    "name": test_run_name,
                    "folderId": folder_id if folder_id else None,
                    "issueKey": issue_key,
                    "items": test_cases_list_of_dicts,
                    "plannedEndDate": time,
                    "plannedStartDate": time,
                    }

    try:
        request = requests.post(request_url,
                                auth=self.authentication,
                                headers=self.headers,
                                data=json.dumps(request_data),
                                verify=False)
        request.raise_for_status()
    except HTTPError as ex:
        # HttpPost: in case of status 400 request.text contains error messages
        self.logger.error("request failed. %s %s", ex, request.text)
        return None
    except (requests.exceptions.ConnectionError, requests.exceptions.RequestException) as ex:
        self.logger.error("request failed. %s", ex)
        return None

    response = request.json()
    self.lint_test_case(test_cases)

    return response["key"], response["id"]
```

The same applies to OTHER REQUESTS, where some need a key and some need an id.

So for now this library does not work uniformly.
If you need more details: it may be that we are running an old version of the TM4J server plugin.
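Where only an id works, the id can usually be resolved from the key first. A sketch of such a lookup, assuming a project listing shaped like the JSON list returned by Jira's GET /rest/api/2/project endpoint (a Jira endpoint, not part of this library):

```python
from typing import Any, Dict, List

def project_id_for_key(projects: List[Dict[str, Any]], project_key: str) -> int:
    """Resolve a Jira project id from its key.

    `projects` is assumed to look like the JSON list returned by Jira's
    GET /rest/api/2/project endpoint.
    """
    for project in projects:
        if project["key"] == project_key:
            return project["id"]
    raise KeyError(f"no project with key {project_key!r}")
```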

get_test_result() does not return latest execution result

get_test_result() is not returning the last execution result.
This can cause problems when editing: for example, when adding a file using add_test_result_attachment(), you sometimes update a test execution that was not necessarily the last one.

Sample data that I created; in the comment section I added notes on the execution order as displayed in Jira:
https://pastebin.com/emVwexGg
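A client-side workaround is to pick the most recent entry from the full result list explicitly. The sketch below assumes each result carries a numeric id that grows with every new execution (consistent with the id values seen in other reports here, but still an assumption):

```python
from typing import Any, Dict, List, Optional

def latest_result(results: List[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
    """Return the most recent execution from a list of test results.

    Assumes each result carries a numeric 'id' that grows with every new
    execution; swap the key if your server exposes a timestamp instead.
    """
    if not results:
        return None
    return max(results, key=lambda result: result["id"])
```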

Test run is not getting created when test case list is provided

Creating a test run fails when a list of test cases is passed.
I get the error below:
request failed. 500 Server Error: Internal Server Error for url: https://jira.**.*****.com.**//rest/atm/1.0/testrun {"message":"Unrecognized field "executedBy" (Class com.kanoah.testmanager.service.model.TestResultDTO), not marked as ignorable\n at [Source: org.apache.catalina.connector.CoyoteInputStream@67e7d629; line: 1, column: 205] ............

Code:
jira_atm.create_test_run(project_key="Test", test_run_name="New", folder="New", test_cases=["Test-T1"])
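If the server build predates the executedBy field, one workaround is to strip unsupported fields from each item payload before posting. A sketch (the field list is specific to this error message, not a documented contract):

```python
from typing import Any, Dict, List

# Fields rejected by this (older) TM4J server build -- derived from the
# error above, not a documented contract.
UNSUPPORTED_FIELDS = {"executedBy"}

def strip_unsupported(items: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Drop payload fields the server does not recognize before posting."""
    return [{key: value for key, value in item.items()
             if key not in UNSUPPORTED_FIELDS}
            for item in items]
```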
