Updating test scenarios
anibalinn committed Oct 15, 2024
1 parent e9b9f9e commit 6852926
Showing 9 changed files with 114 additions and 34 deletions.
6 changes: 4 additions & 2 deletions tests/features/crashing_tests.feature
@@ -13,7 +13,8 @@ Feature: Crashing Tests
@CRASHING
Scenario: Crashing tests with parallel processes and parallel scheme set as "scenario" should be reported
Given I have installed behavex
When I run the behavex command with a crashing test with "2" parallel processes and parallel scheme set as "scenario"
When I setup the behavex command with "2" parallel processes and parallel scheme set as "scenario"
And I run the behavex command with a crashing test
Then I should see the following behavex console outputs and exit code "1"
| output_line |
| Exit code: 1 |
@@ -23,7 +24,8 @@ Feature: Crashing Tests
@CRASHING
Scenario: Crashing tests with parallel processes and parallel scheme set as "feature" should be reported
Given I have installed behavex
When I run the behavex command with a crashing test with "2" parallel processes and parallel scheme set as "feature"
When I setup the behavex command with "2" parallel processes and parallel scheme set as "feature"
And I run the behavex command with a crashing test
Then I should see the following behavex console outputs and exit code "1"
| output_line |
| Exit code: 1 |
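Note: the implementation of the new "I setup the behavex command ..." step is not part of this diff. A minimal sketch of what it could look like, assuming it only records the parallel options on the behave context so that the updated execute_command (in execution_steps.py further down) appends them to the command line:

from behave import when

@when('I setup the behavex command with "{parallel_processes}" parallel processes and parallel scheme set as "{parallel_scheme}"')
def step_impl(context, parallel_processes, parallel_scheme):
    # Hypothetical sketch: only store the options; execute_command later reads
    # context.parallel_processes and context.parallel_scheme when building the command.
    context.parallel_processes = parallel_processes
    context.parallel_scheme = parallel_scheme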
1 change: 1 addition & 0 deletions tests/features/failing_scenarios.feature
@@ -9,3 +9,4 @@ Feature: Failing Scenarios
| 0 scenarios passed, 1 failed, 0 skipped |
| Exit code: 1 |
And I should not see exception messages in the output
And I should see the same number of scenarios in the reports and the console output
2 changes: 2 additions & 0 deletions tests/features/parallel_executions.feature
@@ -10,6 +10,7 @@ Feature: Parallel executions
| PARALLEL_SCHEME \| <parallel_scheme> |
| Exit code: 1 |
And I should not see error messages in the output
And I should see the same number of scenarios in the reports and the console output
Examples:
| parallel_scheme | parallel_processes |
| scenario | 3 |
@@ -29,6 +30,7 @@
| Exit code: 0 |
| 1 scenario passed, 0 failed |
And I should not see error messages in the output
And I should see the same number of scenarios in the reports and the console output
Examples:
| parallel_scheme | parallel_processes | tags |
| scenario | 3 | -t=@PASSING_TAG_3 -t=@PASSING_TAG_3_1 |
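For reference, a rough sketch of the command these parallel scenarios end up building (mirroring execute_command and get_tags_arguments below); the output folder name and tag values are placeholders taken from the examples table:

import os
import subprocess

execution_args = ['behavex', os.path.join('tests', 'features', 'secondary_features'),
                  '-o', 'output/output_000042',
                  '--parallel-processes', '3',
                  '--parallel-scheme', 'scenario',
                  '-t', '@PASSING_TAG_3', '-t', '@PASSING_TAG_3_1']
result = subprocess.run(execution_args, capture_output=True, text=True)
# The console output is then checked for lines such as "1 scenario passed, 0 failed" and "Exit code: 0".
print(result.stdout)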
4 changes: 3 additions & 1 deletion tests/features/passing_scenarios.feature
@@ -9,7 +9,7 @@ Feature: Passing Scenarios
| scenarios passed, 0 failed, 0 skipped |
| Exit code: 0 |
And I should not see error messages in the output

And I should see the same number of scenarios in the reports and the console output

@PASSING
Scenario: Passing tests with AND tags
@@ -22,6 +22,7 @@
| 1 scenario passed, 0 failed |
| Exit code: 0 |
And I should not see error messages in the output
And I should see the same number of scenarios in the reports

@PASSING @WIP
Scenario: Passing tests with NOT tags
@@ -34,3 +35,4 @@
| 1 scenario passed, 0 failed |
| Exit code: 0 |
And I should not see error messages in the output
And I should see the same number of scenarios in the reports and the console output
2 changes: 1 addition & 1 deletion tests/features/progress_bar.feature
@@ -2,7 +2,7 @@ Feature: Progress Bar

Background:
Given I have installed behavex
And I have the progress bar enabled
And The progress bar is enabled

@PROGRESS_BAR @PARALLEL
Scenario Outline: Progress bar should be shown when running tests in parallel
2 changes: 2 additions & 0 deletions tests/features/renaming_scenarios.feature
@@ -9,6 +9,7 @@ Feature: Renaming Scenarios
| scenarios passed, 0 failed, 0 skipped |
| Exit code: 0 |
And I should not see error messages in the output
And I should see the same number of scenarios in the reports and the console output

@RENAME
Scenario Outline: Renaming scenarios and features in parallel by <parallel_scheme> scheme
@@ -20,6 +21,7 @@
| scenarios passed, 0 failed, 0 skipped |
| Exit code: 0 |
And I should not see error messages in the output
And I should see the same number of scenarios in the reports and the console output
Examples:
| parallel_scheme | parallel_processes |
| scenario | 3 |
19 changes: 10 additions & 9 deletions tests/features/secondary_features/steps/secondary_steps.py
@@ -46,12 +46,13 @@ def step_impl(context):
# This step will be skipped
pass

@given('I rename the scenario from context to have the suffix "{suffix}"')
def step_impl(context, suffix):
context.new_scenario_name = context.scenario.name + suffix
logging.info('I rename the scenario from \n"{}" \nto \n"{}"'.format(context.scenario.name, context.new_scenario_name))

@given('I rename the feature from context to have the suffix "{suffix}"')
def step_impl(context, suffix):
context.new_feature_name = context.feature.name + suffix
logging.info('I rename the feature from \n"{}" \nto \n"{}"'.format(context.feature.name, context.new_feature_name))
@given('I rename the {feature_or_scenario} from context to have the suffix "{suffix}"')
def step_impl(context, feature_or_scenario, suffix):
if feature_or_scenario == 'feature':
context.new_feature_name = context.feature.name + suffix
logging.info('I rename the feature from \n"{}" \nto \n"{}"'.format(context.feature.name, context.new_feature_name))
elif feature_or_scenario == 'scenario':
context.new_scenario_name = context.scenario.name + suffix
logging.info('I rename the scenario from \n"{}" \nto \n"{}"'.format(context.scenario.name, context.new_scenario_name))
else:
raise ValueError('Invalid element, it should be "feature" or "scenario"')
1 change: 1 addition & 0 deletions tests/features/skipped_scenarios.feature
@@ -9,3 +9,4 @@ Feature: Skipped Scenarios
| 0 scenarios passed, 0 failed, 1 skipped |
| Exit code: 0 |
And I should not see error messages in the output
And I should see the same number of scenarios in the reports and the console output
111 changes: 90 additions & 21 deletions tests/features/steps/execution_steps.py
@@ -1,64 +1,69 @@
import logging
import os
import random
import re
import subprocess

from behave import given, then, when

root_project_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
tests_features_path = os.path.join(root_project_path, 'tests', 'features')

@given('I have the progress bar enabled')

@given('The progress bar is enabled')
def step_impl(context):
context.progress_bar = True


@when('I run the behavex command with a passing test')
@when('I run the behavex command with passing tests')
def step_impl(context):
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/passing_tests.feature'), '-o', 'output/output_{}'.format(get_random_number(6))]
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/passing_tests.feature'), '-o', context.output_path]
execute_command(context, execution_args)


@when('I run the behavex command that renames scenarios and features')
def step_impl(context):
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/rename_tests.feature'), '-o', 'output/output_{}'.format(get_random_number(6))]
if hasattr(context, 'parallel_processes'):
execution_args += ['--parallel-processes', context.parallel_processes]
if hasattr(context, 'parallel_scheme'):
execution_args += ['--parallel-scheme', context.parallel_scheme]
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/rename_tests.feature'), '-o', context.output_path]
execute_command(context, execution_args)


@when('I run the behavex command with a failing test')
def step_impl(context):
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/failing_tests.feature'), '-o', 'output/output_{}'.format(get_random_number(6))]
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/failing_tests.feature'), '-o', context.output_path]
execute_command(context, execution_args)


@when('I run the behavex command with a crashing test')
@when('I run the behavex command with a crashing test with "{parallel_processes}" parallel processes and parallel scheme set as "{parallel_scheme}"')
def step_impl(context, parallel_processes="1", parallel_scheme='scenario'):
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex',
os.path.join(tests_features_path, os.path.join(tests_features_path, 'crashing_features/crashing_tests.feature')),
'-o', 'output/output_{}'.format(get_random_number(6)),
'--parallel-processes', parallel_processes,
'--parallel-scheme', parallel_scheme]
'-o', context.output_path]
execute_command(context, execution_args)


@when('I run the behavex command with a skipped test')
def step_impl(context):
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/skipped_tests.feature'), '-o', 'output/output_{}'.format(get_random_number(6))]
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/skipped_tests.feature'), '-o', context.output_path]
execute_command(context, execution_args)


@when('I run the behavex command with an untested test')
def step_impl(context):
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/untested_tests.feature'), '-o', 'output/output_{}'.format(get_random_number(6))]
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/untested_tests.feature'), '-o', context.output_path]
execute_command(context, execution_args)


@when('I run the behavex command with "{parallel_processes}" parallel processes and parallel scheme set as "{parallel_schema}"')
def step_impl(context, parallel_processes, parallel_schema):
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', 'output/output_{}'.format(get_random_number(6)), '--parallel-processes', parallel_processes, '--parallel-scheme', parallel_schema]
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', context.output_path, '--parallel-processes', parallel_processes, '--parallel-scheme', parallel_schema]
execute_command(context, execution_args)


@@ -73,9 +78,10 @@ def step_impl(context):
scheme = context.table[0]['parallel_scheme']
processes = context.table[0]['parallel_processes']
tags = context.table[0]['tags']
context.output_path = 'output/output_{}'.format(get_random_number(6))
tags_to_folder_name = get_tags_string(tags)
tags_array = get_tags_arguments(tags)
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', 'output/output_{}'.format(get_random_number(6)), '--parallel-processes', processes, '--parallel-scheme', scheme] + tags_array
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', context.output_path, '--parallel-processes', processes, '--parallel-scheme', scheme] + tags_array
execute_command(context, execution_args)


@@ -84,14 +90,16 @@ def step_impl(context):
tags = context.table[0]['tags']
tags_to_folder_name = get_tags_string(tags)
tags_array = get_tags_arguments(tags)
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', 'output/output_{}'.format(get_random_number(6))] + tags_array
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', context.output_path] + tags_array
execute_command(context, execution_args)


@when('I run the behavex command by performing a dry run')
def step_impl(context):
# Generate a random number between 1 and 1000000, zero-padded to 6 digits
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', 'output/output_{}'.format(get_random_number(6)), '--dry-run']
context.output_path = 'output/output_{}'.format(get_random_number(6))
execution_args = ['behavex', os.path.join(tests_features_path, 'secondary_features/'), '-o', context.output_path, '--dry-run']
execute_command(context, execution_args)


@@ -110,28 +118,89 @@ def step_impl(context):
for message in error_messages:
assert message not in context.result.stdout.lower(), f"Unexpected output: {context.result.stdout}"


@then('I should not see exception messages in the output')
def step_impl(context):
exception_messages = ["exception", "traceback"]
for message in exception_messages:
assert message not in context.result.stdout.lower(), f"Unexpected output: {context.result.stdout}"


@then('I should see the same number of scenarios in the reports and the console output')
def step_impl(context):
total_scenarios_in_html_report = get_total_scenarios_in_html_report(context)
logging.info(f"Total scenarios in the HTML report: {total_scenarios_in_html_report}")
total_scenarios_in_junit_reports = get_total_scenarios_in_junit_reports(context)
logging.info(f"Total scenarios in the JUnit reports: {total_scenarios_in_junit_reports}")
total_scenarios_in_console_output = get_total_scenarios_in_console_output(context)
logging.info(f"Total scenarios in the console output: {total_scenarios_in_console_output}")
assert total_scenarios_in_html_report == total_scenarios_in_junit_reports == total_scenarios_in_console_output, f"Scenario count mismatch: {total_scenarios_in_html_report} in the HTML report, {total_scenarios_in_junit_reports} in the JUnit reports, and {total_scenarios_in_console_output} in the console output"


@then('I should see the same number of scenarios in the reports')
def step_impl(context):
total_scenarios_in_html_report = get_total_scenarios_in_html_report(context)
logging.info(f"Total scenarios in the HTML report: {total_scenarios_in_html_report}")
total_scenarios_in_junit_reports = get_total_scenarios_in_junit_reports(context)
logging.info(f"Total scenarios in the JUnit reports: {total_scenarios_in_junit_reports}")
assert total_scenarios_in_html_report == total_scenarios_in_junit_reports, f"Scenario count mismatch: {total_scenarios_in_html_report} in the HTML report and {total_scenarios_in_junit_reports} in the JUnit reports"


def get_tags_arguments(tags):
tags_array = []
for tag in tags.split(' '):
tags_array += tag.split('=')
return tags_array


def get_tags_string(tags):
return tags.replace('-t=','_AND_').replace('~','NOT_').replace(',','_OR_').replace(' ','').replace('@','')


def get_random_number(total_digits):
return str(random.randint(1, 1000000)).zfill(total_digits)

def execute_command(context, command, print_output=True):

def get_total_scenarios_in_console_output(context):
# Verify the scenario counts reported in the console output
console_output = context.result.stdout
# Extract the number of scenarios by analyzing the following pattern: X scenarios passed, Y failed, Z skipped
scenario_pattern = re.compile(r'(\d+) scenarios? passed, (\d+) failed, (\d+) skipped')
match = scenario_pattern.search(console_output)
if match:
scenarios_passed = int(match.group(1))
scenarios_failed = int(match.group(2))
scenarios_skipped = int(match.group(3))
else:
raise ValueError("No scenarios found in the console output")
return scenarios_passed + scenarios_failed + scenarios_skipped


def get_total_scenarios_in_html_report(context):
report_path = os.path.abspath(os.path.join(context.output_path, 'report.html'))
with open(report_path, 'r') as file:
html_content = file.read()
return html_content.count('data-scenario-tags=')


def get_total_scenarios_in_junit_reports(context):
junit_folder = os.path.abspath(os.path.join(context.output_path, 'behave'))
total_scenarios_in_junit_reports = 0
for file_name in os.listdir(junit_folder):
if file_name.endswith('.xml'):
with open(os.path.join(junit_folder, file_name), 'r') as junit_file:
xml_content = junit_file.read()
total_scenarios_in_junit_reports += xml_content.count('<testcase')
return total_scenarios_in_junit_reports


def execute_command(context, execution_args, print_output=True):
if "progress_bar" in context and context.progress_bar:
command.insert(2, '--show-progress-bar')
context.result = subprocess.run(command, capture_output=True, text=True)
execution_args.insert(2, '--show-progress-bar')
if hasattr(context, 'parallel_processes'):
execution_args += ['--parallel-processes', context.parallel_processes]
if hasattr(context, 'parallel_scheme'):
execution_args += ['--parallel-scheme', context.parallel_scheme]
context.result = subprocess.run(execution_args, capture_output=True, text=True)
if print_output:
logging.info(context.result.stdout)
