
Tags · parameterIT/tool

v1.0

changed modu to core (#114)

* changed modu to core

* format

---------

Co-authored-by: FrederikRothe <[email protected]>

v0.1.5

End of Week 5

We decoupled quality models from the byoqm module itself. We achieved
this by creating a new, separate model module and importing Python files
at runtime using `importlib`.
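
As a minimal sketch of this runtime loading (the helper name and path
are illustrative, not byoqm's actual API):

```python
import importlib.util
from pathlib import Path

def load_model_module(path: Path):
    # Build a module spec from the file location and execute it,
    # yielding a regular module object without any install step.
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# e.g. model = load_model_module(Path("models/code_climate.py"))
```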

An issue we encountered is that a quality model's results are not
complete until both the metric measurement step and the aggregation
step have run. However, the aggregation step depends on the metric
step. Thus, we created `runner.py` to handle the execution of these two
steps in the correct order, packaging them into what appears to be a
single operation.
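
A rough sketch of the idea behind `runner.py` (the function and key
names are illustrative, not the real module):

```python
import csv
from pathlib import Path

def run(model, out: Path = Path("output.csv")):
    # Step 1: execute every metric and persist the raw measurements.
    with out.open("w", newline="") as f:
        writer = csv.writer(f)
        for name, metric in model["metrics"].items():
            writer.writerow([name, metric()])
    # Step 2: aggregation runs only after step 1 has finished, because
    # it reads the measurements written above.
    return {name: agg(out) for name, agg in model["aggregations"].items()}
```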

We started using `click` to quickly develop a CLI so that, as the
program's options grow in complexity, it remains easy to specify which
functionality to execute.
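
For example, a minimal `click` command (the option names here are
hypothetical):

```python
import click

@click.command()
@click.argument("src", type=click.Path(exists=True))
@click.option("--model", default="code_climate", help="Quality model to run.")
def cli(src, model):
    """Run a quality model against the code at SRC."""
    click.echo(f"Running {model} on {src}")

if __name__ == "__main__":
    cli()
```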

We researched and worked on implementing a "plugin architecture" for
byoqm. One approach was to use `pluggy`, which we experimented with in
another (private) repository. However, we found that as we loosen the
interface requirements for a metric (which is a plugin), determining the
exact interface at runtime and abiding by it becomes tricky.
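
For reference, the `pluggy` approach looks roughly like this (the hook
and metric names are made up for illustration):

```python
import pluggy

hookspec = pluggy.HookspecMarker("byoqm")
hookimpl = pluggy.HookimplMarker("byoqm")

class MetricSpec:
    @hookspec
    def run_metric(self, source_files):
        """Each plugin returns one measurement for the given files."""

class LinesOfCode:
    @hookimpl
    def run_metric(self, source_files):
        return sum(len(f.read_text().splitlines()) for f in source_files)

pm = pluggy.PluginManager("byoqm")
pm.add_hookspecs(MetricSpec)
pm.register(LinesOfCode())
results = pm.hook.run_metric(source_files=[])  # one result per plugin
```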

An alternative approach is inspired by QGIS. Here, each metric is given
an object containing all the information made available by byoqm (e.g.
the parsed AST, the source files being inspected, the current
language), and each metric can select the information it needs from
this object. This solves the issue of determining interfaces at
runtime, but has the drawback that the information and functionality
included in this object requires oversight on our part.
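
A sketch of what such an object could look like (the names and fields
are illustrative assumptions):

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class MetricContext:
    # Everything byoqm makes available; a metric picks what it needs.
    ast: object               # the parsed syntax tree
    source_files: list[Path]  # the files under inspection
    language: str             # the current language, e.g. "python"

def file_count(ctx: MetricContext) -> int:
    # This metric only cares about the source files and ignores the rest.
    return len(ctx.source_files)
```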

We reintroduced the already-written tests because we were lacking
confidence that the code we wrote actually works.

To support more languages we rely on tree-sitter. Therefore, we have
worked on implementing all metrics as tree-sitter queries, as this
allows for a language-agnostic representation of a metric's
functionality.
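
As an illustration of the query-based style (using the py-tree-sitter
API of the time; the grammar bundle path is an assumption):

```python
from tree_sitter import Language, Parser

# Assumes a grammar bundle built beforehand with Language.build_library.
PY_LANGUAGE = Language("build/languages.so", "python")

parser = Parser()
parser.set_language(PY_LANGUAGE)
tree = parser.parse(b"def f():\n    pass\n")

# The query is plain text per grammar, so the metric logic itself stays
# language agnostic: swap the grammar and the query, keep the code.
query = PY_LANGUAGE.query("(function_definition) @function")
print(len(query.captures(tree.root_node)))
```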

v0.1.4

End of Week 4

After discussion we found that metrics are too tightly coupled to the
quality model. Since metrics are leaves of the quality model, they can
only be executed through a reference to the quality model. This is
problematic because it can, for example, complicate separating the
collection of the measurements themselves from a description of the
desired measurements.

For this reason we have simplified the metric abstraction to an
executable that produces a number on STDOUT. The quality model can then
reference these executables to describe the desired measurements without
actually coupling the functionality of the metric to the quality model.
A quality model is just a Python dictionary referencing these
executables and some aggregation functions that read and aggregate the
measurements from a .csv file.
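
Sketched out, the dictionary could look like this (the paths and
aggregation are made up for illustration):

```python
import csv
import statistics
import subprocess

quality_model = {
    "metrics": {
        # Each value is a path to an executable that prints one number.
        "lines_of_code": "./metrics/lines_of_code",
        "method_length": "./metrics/method_length",
    },
    "aggregations": {
        "maintainability": statistics.mean,
    },
}

def measure(model, out="output.csv"):
    # Run each metric executable and record its STDOUT in the .csv file.
    with open(out, "w", newline="") as f:
        writer = csv.writer(f)
        for name, exe in model["metrics"].items():
            result = subprocess.run([exe], capture_output=True, text=True)
            writer.writerow([name, result.stdout.strip()])

def aggregate(model, path="output.csv"):
    # Aggregations only ever see the .csv file, never the metrics.
    with open(path, newline="") as f:
        values = [float(v) for _, v in csv.reader(f)]
    return {name: agg(values) for name, agg in model["aggregations"].items()}
```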

While not fully merged, the proposed changes can be seen in PR #29.

We have continued work on implementing metrics based on Code Climate.

We have downgraded to version ^3.10 of Python because 3.11 was causing
issues with the tree-sitter dependency for some group members, and we
preferred remaining productive over sinking hours into fixing the issue
in a way that keeps version 3.11 as a requirement.

v0.1.3

End of Week 3

Implemented the `quality_model` abstraction as a `tree_quality_model`
that follows a hierarchical decomposition of a quality model, because
that is what we found in much of the literature. In this tree-like
structure, metrics are the leaves.
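
In code terms, the structure is roughly the following (simplified; the
real node types differ):

```python
class Metric:
    """A leaf: produces a raw measurement."""
    def value(self) -> float:
        raise NotImplementedError

class Characteristic:
    """An inner node: aggregates the values of its children."""
    def __init__(self, children):
        self.children = children

    def value(self) -> float:
        return sum(c.value() for c in self.children) / len(self.children)

# maintainability
# ├── duplication        (Metric leaf)
# └── complexity         (Characteristic)
#     ├── method_length  (Metric leaf)
#     └── nesting_depth  (Metric leaf)
```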

Implemented some metrics using the `Metric` interface.

Added tree-sitter as a dependency because it will enable us to write
metrics that work for multiple programming languages from the grammars
that it provides.

GitHub Actions now also use Poetry for setup so that they more closely
match the local development environment.

v0.1.2

End of Week 2

We chose AGPL v3 as a license with the intent of keeping work on this
project as open as possible. This is motivated by the desire to make
how the program works transparent.

Added a PR template and a GitHub action that runs tests continuously,
to make PRs and testing of code more consistent.

Designed three core abstractions:
- `metrics`
- `parser`
- `quality_model`

The idea is to enable parameterization using an OOP structure. Users can
implement their own `metrics` by implementing the `metrics` abstraction.
A `parser` generates a `quality_model` from some user input; the idea is
to write a YAML parser that lets users declare a `quality_model`.
Exactly how the `quality_model` will contain the `metrics` is still to
be decided.
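
A minimal sketch of how these abstractions might relate (the method
names are placeholders, not the final design):

```python
from abc import ABC, abstractmethod

class Metric(ABC):
    # Users parameterize byoqm by subclassing this.
    @abstractmethod
    def run(self) -> float: ...

class QualityModel:
    def __init__(self, metrics: list[Metric]):
        self.metrics = metrics

class Parser(ABC):
    # e.g. a YAML parser turning a declarative file into a QualityModel.
    @abstractmethod
    def parse(self, user_input: str) -> QualityModel: ...
```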

v0.1.1

End of Week 1

This week we set up Poetry as a build system to manage dependencies. An
alternative would have been pip; however, we were advised to use Poetry.

Used the Black formatter to make the code format consistent, and added
a check that will continuously run the tests.