A Coding Implementation of Advanced PyTest to Build Customized and Automated Testing with Plugins, Fixtures, and JSON Reporting

In this tutorial, we explore the advanced capabilities of PyTest, one of the most powerful testing frameworks in Python. We build a complete mini-project from scratch that demonstrates fixtures, markers, plugins, parameterization, and custom configuration. We focus on showing how PyTest can evolve from a simple test runner into a robust, extensible system for real-world applications. By the end, we understand not just how to write tests, but how to control and customize PyTest’s behavior to fit any project’s needs. Check out the FULL CODES here.

import sys, subprocess, os, textwrap, pathlib, json


subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest>=8.0"], check=True)


root = pathlib.Path("pytest_advanced_tutorial").absolute()
if root.exists():
   import shutil; shutil.rmtree(root)
(root / "calc").mkdir(parents=True)
(root / "app").mkdir()
(root / "tests").mkdir()

We begin by setting up our environment, importing essential Python libraries for file handling and subprocess execution. We install the latest version of PyTest to ensure compatibility and then create a clean project structure with folders for our main code, application modules, and tests. This gives us a solid foundation to organize everything neatly before writing any test logic. Check out the FULL CODES here.
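
As a quick optional sanity check (not part of the original script), we can confirm the scaffold exists before writing any project files; the snippet below only uses pathlib and assumes the three directories were just created.

# Optional sanity check (assumption: the scaffold above was just created).
import pathlib

root = pathlib.Path("pytest_advanced_tutorial").absolute()
for sub in ("calc", "app", "tests"):
   assert (root / sub).is_dir(), f"missing directory: {sub}"
print("Scaffold ready:", sorted(p.name for p in root.iterdir()))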

(root / "pytest.ini").write_text(textwrap.dedent("""
[pytest]
addopts = -q -ra --maxfail=1
testpaths = tests
markers =
   slow: slow tests (use --runslow to run)
   io: tests hitting the file system
   api: tests patching external calls
""").strip()+"n")


(root / "conftest.py").write_text(textwrap.dedent(r'''
import os, time, pytest, json
# Module-level counters shared by all hooks; pytest_runtest_logreport only receives
# a report object, so it cannot reach config._summary and updates this dict instead.
SUMMARY = {"passed":0,"failed":0,"skipped":0,"slow_ran":0}
def pytest_addoption(parser):
   parser.addoption("--runslow", action="store_true", help="run slow tests")
def pytest_configure(config):
   config.addinivalue_line("markers", "slow: slow tests")
   config._summary = SUMMARY
def pytest_collection_modifyitems(config, items):
   if config.getoption("--runslow"):
       return
   skip = pytest.mark.skip(reason="need --runslow to run")
   for item in items:
       if "slow" in item.keywords: item.add_marker(skip)
def pytest_runtest_logreport(report):
   # TestReport has no config attribute, so update the module-level SUMMARY directly.
   if report.skipped:
       SUMMARY["skipped"]+=1   # marker-based skips are reported during setup, xfails during call
   elif report.when=="call":
       if report.passed: SUMMARY["passed"]+=1
       elif report.failed: SUMMARY["failed"]+=1
       if "slow" in report.keywords and report.passed: SUMMARY["slow_ran"]+=1
def pytest_terminal_summary(terminalreporter, exitstatus, config):
   s=config._summary
   terminalreporter.write_sep("=", "SESSION SUMMARY (custom plugin)")
   terminalreporter.write_line(f"Passed: {s['passed']} | Failed: {s['failed']} | Skipped: {s['skipped']}")
   terminalreporter.write_line(f"Slow tests run: {s['slow_ran']}")
   terminalreporter.write_line("PyTest finished successfully ✅" if s["failed"]==0 else "Some tests failed ❌")


@pytest.fixture(scope="session")
def settings(): return {"env":"prod","max_retries":2}
@pytest.fixture(scope="function")
def event_log(): logs=[]; yield logs; print("\nEVENT LOG:", logs)
@pytest.fixture
def temp_json_file(tmp_path):
   p=tmp_path/"data.json"; p.write_text('{"msg":"hi"}'); return p
@pytest.fixture
def fake_clock(monkeypatch):
   t={"now":1000.0}; monkeypatch.setattr(time,"time",lambda: t["now"]); return t
'''))

We now create our PyTest configuration and plugin files. In pytest.ini, we define markers, default options, and test paths to control how tests are discovered and filtered. In conftest.py, we implement a custom plugin that tracks passed, failed, and skipped tests, adds a --runslow option, and provides fixtures for reusable test resources. This helps us extend PyTest’s core behavior while keeping our setup clean and modular. Check out the FULL CODES here.
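
As an optional check (not in the original script), we can already ask PyTest to list the registered markers; the built-in --markers flag prints every marker it knows about, including the slow, io, and api entries declared in pytest.ini.

# Optional check (not part of the original script): list registered markers.
import pathlib, subprocess, sys

root = pathlib.Path("pytest_advanced_tutorial").absolute()
subprocess.run([sys.executable, "-m", "pytest", str(root), "--markers"], check=False)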

(root/"calc"/"__init__.py").write_text(textwrap.dedent('''
from .vector import Vector
def add(a,b): return a+b
def div(a,b):
   if b==0: raise ZeroDivisionError("division by zero")
   return a/b
def moving_avg(xs,k):
   if k<=0 or k>len(xs): raise ValueError("bad window")
   out=[]; s=sum(xs[:k]); out.append(s/k)
   for i in range(k,len(xs)):
       s+=xs[i]-xs[i-k]; out.append(s/k)
   return out
'''))


(root/"calc"/"vector.py").write_text(textwrap.dedent('''
class Vector:
   __slots__=("x","y","z")
   def __init__(self,x=0,y=0,z=0): self.x,self.y,self.z=float(x),float(y),float(z)
   def __add__(self,o): return Vector(self.x+o.x,self.y+o.y,self.z+o.z)
   def __sub__(self,o): return Vector(self.x-o.x,self.y-o.y,self.z-o.z)
   def __mul__(self,s): return Vector(self.x*s,self.y*s,self.z*s)
   __rmul__=__mul__
   def norm(self): return (self.x**2+self.y**2+self.z**2)**0.5
   def __eq__(self,o): return abs(self.x-o.x)<1e-9 and abs(self.y-o.y)<1e-9 and abs(self.z-o.z)<1e-9
   def __repr__(self): return f"Vector({self.x:.2f},{self.y:.2f},{self.z:.2f})"
'''))

We now build the core calculation module for our project. In the calc package, we define simple mathematical utilities, including addition, division with error handling, and a moving-average function, to demonstrate logic testing. Alongside this, we create a Vector class that supports arithmetic operations, equality checks, and norm computation, a perfect example for testing custom objects and comparisons using PyTest. Check out the FULL CODES here.
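
To make the behavior of these modules concrete outside PyTest, here is a small illustrative usage snippet (not part of the original script); it puts the project root on sys.path so calc imports directly, and the expected outputs are shown in comments.

# Illustrative usage of the calc package (not part of the original script).
import sys, pathlib
sys.path.insert(0, str(pathlib.Path("pytest_advanced_tutorial").absolute()))

from calc import add, div, moving_avg
from calc.vector import Vector

print(add(2, 3))                              # 5
print(div(9, 3))                              # 3.0
print(moving_avg([1, 2, 3, 4, 5], 3))         # [2.0, 3.0, 4.0]
print(2 * Vector(1, 2, 3) + Vector(0, 1, 0))  # Vector(2.00,5.00,6.00)
print(Vector(3, 4, 0).norm())                 # 5.0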

(root/"app"/"io_utils.py").write_text(textwrap.dedent('''
import json, pathlib, time
def save_json(path,obj):
   path=pathlib.Path(path); path.write_text(json.dumps(obj)); return path
def load_json(path): return json.loads(pathlib.Path(path).read_text())
def timed_operation(fn,*a,**kw):
   t0=time.time(); out=fn(*a,**kw); t1=time.time(); return out,t1-t0
'''))
(root/"app"/"api.py").write_text(textwrap.dedent('''
import os, time, random
def fetch_username(uid):
   if os.environ.get("API_MODE")=="offline": return f"cached_{uid}"
   time.sleep(0.001); return f"user_{uid}_{random.randint(100,999)}"
'''))


(root/"tests"/"test_calc.py").write_text(textwrap.dedent('''
import pytest, math
from calc import add,div,moving_avg
from calc.vector import Vector
@pytest.mark.parametrize("a,b,exp",[(1,2,3),(0,0,0),(-1,1,0)])
def test_add(a,b,exp): assert add(a,b)==exp
@pytest.mark.parametrize("a,b,exp",[(6,3,2),(8,2,4)])
def test_div(a,b,exp): assert div(a,b)==exp
@pytest.mark.xfail(raises=ZeroDivisionError)
def test_div_zero(): div(1,0)
def test_avg(): assert moving_avg([1,2,3,4,5],3)==[2,3,4]
def test_vector_ops(): v=Vector(1,2,3)+Vector(4,5,6); assert v==Vector(5,7,9)
'''))


(root/"tests"/"test_io_api.py").write_text(textwrap.dedent('''
import pytest, os
from app.io_utils import save_json,load_json,timed_operation
from app.api import fetch_username
@pytest.mark.io
def test_io(temp_json_file,tmp_path):
   d={"x":5}; p=tmp_path/"a.json"; save_json(p,d); assert load_json(p)==d
   assert load_json(temp_json_file)=={"msg":"hi"}
def test_timed(capsys):
   val,dt=timed_operation(lambda x:x*3,7); print("dt=",dt); out=capsys.readouterr().out
   assert "dt=" in out and val==21
@pytest.mark.api
def test_api(monkeypatch):
   monkeypatch.setenv("API_MODE","offline")
   assert fetch_username(9)=="cached_9"
'''))


(root/"tests"/"test_slow.py").write_text(textwrap.dedent('''
import time, pytest
@pytest.mark.slow
def test_slow(event_log,fake_clock):
   event_log.append(f"start@{fake_clock['now']}")
   fake_clock["now"]+=3.0
   event_log.append(f"end@{fake_clock['now']}")
   assert len(event_log)==2
'''))

We add lightweight app utilities for JSON I/O and a mocked API to exercise real-world behaviors without external services. We write focused tests that use parametrization, xfail, markers, tmp_path, capsys, and monkeypatch to validate logic and side effects. We include a slow test wired to our event_log and fake_clock fixtures to demonstrate controlled timing and session-wide state. Check out the FULL CODES here.
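
As one possible extension (not written to the project by the script above), the same monkeypatch fixture can also pin down the "online" branch of fetch_username by patching random.randint and time.sleep; a sketch of such a test, which would live alongside tests/test_io_api.py, looks like this (the test name below is ours, not part of the generated suite).

# Sketch of an additional test (not generated by the script); it would live in tests/.
import random
from app.api import fetch_username

def test_api_online_deterministic(monkeypatch):
   monkeypatch.delenv("API_MODE", raising=False)             # force the online branch
   monkeypatch.setattr(random, "randint", lambda a, b: 123)  # make the suffix predictable
   monkeypatch.setattr("time.sleep", lambda s: None)         # skip the artificial delay
   assert fetch_username(7) == "user_7_123"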

print("📦 Project created at:", root)
print("n▶ RUN #1 (default, skips @slow)n")
r1=subprocess.run([sys.executable,"-m","pytest",str(root)],text=True)
print("n▶ RUN #2 (--runslow)n")
r2=subprocess.run([sys.executable,"-m","pytest",str(root),"--runslow"],text=True)


summary_file=root/"summary.json"
summary={
   "total_tests":sum("test_" in str(p) for p in root.rglob("test_*.py")),
   "runs": ["default","--runslow"],
   "results": ["success" if r1.returncode==0 else "fail",
               "success" if r2.returncode==0 else "fail"],
   "contains_slow_tests": True,
   "example_event_log":["start@1000.0","end@1003.0"]
}
summary_file.write_text(json.dumps(summary,indent=2))
print("n📊 FINAL SUMMARY")
print(json.dumps(summary,indent=2))
print("n✅ Tutorial completed — all tests & summary generated successfully.")

We now run our test suite twice: first with the default configuration that skips slow tests, and then again with the --runslow flag to include them. After both runs, we generate a JSON summary containing test outcomes, the total number of test files, and a sample event log. This final summary gives us a clear snapshot of our project’s testing health, confirming that every component behaves as expected from start to finish.
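
If we want the plugin itself to emit machine-readable output instead of assembling the summary in the driver script, one option is a pytest_sessionfinish hook appended to conftest.py; the sketch below is an optional extension (the plugin_summary.json filename is ours), not something the tutorial's conftest.py contains.

# Sketch of an optional conftest.py addition (not part of the original script):
# persist the plugin counters as JSON at the end of the session, e.g. as a CI artifact.
import json, pathlib

def pytest_sessionfinish(session, exitstatus):
   summary = getattr(session.config, "_summary", {})
   out = pathlib.Path(session.config.rootpath) / "plugin_summary.json"  # hypothetical file name
   out.write_text(json.dumps({"exit_status": int(exitstatus), **summary}, indent=2))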

In conclusion, we see how PyTest helps us test smarter, not harder. We design a plugin that tracks results, uses fixtures for state management, and controls slow tests with custom options, all while keeping the workflow clean and modular. We conclude with a detailed JSON summary that demonstrates how easily PyTest can integrate with modern CI and analytics pipelines. With this foundation, we are now confident to extend PyTest further, combining coverage, benchmarking, or even parallel execution for large-scale, professional-grade testing.
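
For instance, coverage and parallel execution are mostly a matter of installing the pytest-cov and pytest-xdist plugins; the commands below are a possible next step we might run ourselves, not something the tutorial script executes.

# Possible next step (not run by the tutorial): coverage plus parallel execution.
import pathlib, subprocess, sys

root = pathlib.Path("pytest_advanced_tutorial").absolute()
subprocess.run([sys.executable, "-m", "pip", "install", "-q", "pytest-cov", "pytest-xdist"], check=True)
subprocess.run([sys.executable, "-m", "pytest", str(root), "--runslow", "--cov=calc", "--cov=app", "-n", "auto"], check=False)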

