From: Helen Koike <helen.koike@collabora.com>
To: dri-devel@lists.freedesktop.org, Helen Koike, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Daniel Vetter
Cc: robdclark@chromium.org, dmitry.baryshkov@linaro.org, vignesh.raman@collabora.com, sergi.blanch.torne@collabora.com, guilherme.gallo@collabora.com, david.heidelberg@collabora.com, quic_abhinavk@quicinc.com, quic_jesszhan@quicinc.com, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/9] drm/ci: add helper script update-xfails.py
Date: Fri, 20 Oct 2023 00:41:18 -0300
Message-Id: <20231020034124.136295-4-helen.koike@collabora.com>
In-Reply-To: <20231020034124.136295-1-helen.koike@collabora.com>
References: <20231020034124.136295-1-helen.koike@collabora.com>

Add a helper script that, given a GitLab pipeline URL, analyses which results are failures and which are flakes, and updates the xfails folder accordingly.

Example: trigger a pipeline in the GitLab infrastructure, then re-try a few jobs more than once (so there is data on whether failures are consistent across jobs with the same name, or whether they are flakes) and execute:

    update-xfails.py https://gitlab.freedesktop.org/helen.fornazier/linux/-/pipelines/970661

git diff should then show that it updated files in the xfails folder.

Signed-off-by: Helen Koike <helen.koike@collabora.com>
Tested-by: Vignesh Raman <vignesh.raman@collabora.com>
Reviewed-by: David Heidelberg <david.heidelberg@collabora.com>
---
Hello,

This script has been very handy for me, so I suppose it could be handy to others too, which is why I'm publishing it in the xfails folder. Let me know your thoughts.
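Roughly, the rule the script applies is: a failure that reproduces in every job with the same name is recorded (with its result) in the *-fails.txt file, while one that does not reproduce everywhere is treated as a flake and recorded in *-flakes.txt; an UnexpectedPass seen consistently simply drops the stale expectation. A condensed sketch of the core check, equivalent to is_unit_test_present_in_other_jobs() below (the job ids and test names here are made up for illustration):

    # job id -> failures.csv lines collected from jobs sharing one job name
    job_results = {
        "123": ["kms_foo@bar,Fail", "kms_foo@baz,Fail"],
        "456": ["kms_foo@bar,Fail"],
    }

    def seen_in_all_jobs(entry):
        # consistent only if every retried job reported the same entry
        return all(entry in results for results in job_results.values())

    print(seen_in_all_jobs("kms_foo@bar,Fail"))  # True  -> consistent, *-fails.txt
    print(seen_in_all_jobs("kms_foo@baz,Fail"))  # False -> flake, *-flakes.txt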
Derivative work from the RFC:
https://patchwork.kernel.org/project/dri-devel/patch/20230925195556.106090-1-helen.koike@collabora.com/

What changed:
- refactored and fixed the script, which had several bugs
- changed the output to show a diff of what has changed
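For anyone wanting to try it, my local setup is roughly the following (this assumes a virtualenv, and that credentials for gitlab.freedesktop.org are already configured for python-gitlab):

    pip install -r drivers/gpu/drm/ci/xfails/requirements.txt
    ./drivers/gpu/drm/ci/xfails/update-xfails.py <pipeline-url> [<pipeline-url> ...]
    git diff drivers/gpu/drm/ci/xfails/

More than one pipeline URL can be passed, and --only-flakes treats every detected failure as a flake and edits only the *-flakes.txt files.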
Regards,
Helen
---
v2:
- fixed typos
- remove a test from fails.txt before adding it to flakes.txt, if present
- fix handling when failures.csv is an empty string
---
 drivers/gpu/drm/ci/xfails/requirements.txt |  17 ++
 drivers/gpu/drm/ci/xfails/update-xfails.py | 204 +++++++++++++++++++++
 2 files changed, 221 insertions(+)
 create mode 100644 drivers/gpu/drm/ci/xfails/requirements.txt
 create mode 100755 drivers/gpu/drm/ci/xfails/update-xfails.py

diff --git a/drivers/gpu/drm/ci/xfails/requirements.txt b/drivers/gpu/drm/ci/xfails/requirements.txt
new file mode 100644
index 000000000000..f64fa608b3c4
--- /dev/null
+++ b/drivers/gpu/drm/ci/xfails/requirements.txt
@@ -0,0 +1,17 @@
+git+https://gitlab.freedesktop.org/gfx-ci/ci-collate@811b4c3f7e6e372af6225c03843fc6717847bdbc
+termcolor==2.3.0
+
+# ci-collate dependencies
+certifi==2023.7.22
+charset-normalizer==3.2.0
+idna==3.4
+pip==23.2.1
+python-gitlab==3.15.0
+requests==2.31.0
+requests-toolbelt==1.0.0
+ruamel.yaml==0.17.32
+ruamel.yaml.clib==0.2.7
+setuptools==68.0.0
+tenacity==8.2.3
+urllib3==2.0.4
+wheel==0.41.1
\ No newline at end of file
diff --git a/drivers/gpu/drm/ci/xfails/update-xfails.py b/drivers/gpu/drm/ci/xfails/update-xfails.py
new file mode 100755
index 000000000000..fd13e5bcde8c
--- /dev/null
+++ b/drivers/gpu/drm/ci/xfails/update-xfails.py
@@ -0,0 +1,204 @@
+#!/usr/bin/env python3
+
+import argparse
+from collections import defaultdict
+import difflib
+import os
+import re
+from glcollate import Collate
+from termcolor import colored
+from urllib.parse import urlparse
+
+
+def get_canonical_name(job_name):
+    return re.split(r" \d+/\d+", job_name)[0]
+
+
+def get_xfails_file_path(job_name, suffix):
+    canonical_name = get_canonical_name(job_name)
+    name = canonical_name.replace(":", "-")
+    script_dir = os.path.dirname(os.path.abspath(__file__))
+    return os.path.join(script_dir, f"{name}-{suffix}.txt")
+
+
+def get_unit_test_name_and_results(unit_test):
+    if "Artifact results/failures.csv not found" in unit_test or '' == unit_test:
+        return None, None
+    unit_test_name, unit_test_result = unit_test.strip().split(",")
+    return unit_test_name, unit_test_result
+
+
+def read_file(file_path):
+    try:
+        with open(file_path, "r") as file:
+            f = file.readlines()
+            if len(f):
+                f[-1] = f[-1].strip() + "\n"
+        return f
+    except FileNotFoundError:
+        return []
+
+
+def save_file(content, file_path):
+    # delete the file if content is empty
+    if not content or not any(content):
+        if os.path.exists(file_path):
+            os.remove(file_path)
+        return
+
+    with open(file_path, "w") as file:
+        file.writelines(content)
+
+
+def is_test_present_on_file(file_content, unit_test_name):
+    return any(unit_test_name in line for line in file_content)
+
+
+def is_unit_test_present_in_other_jobs(unit_test, job_ids):
+    return all(unit_test in job_ids[job_id] for job_id in job_ids)
+
+
+def remove_unit_test_if_present(lines, unit_test_name):
+    if not is_test_present_on_file(lines, unit_test_name):
+        return
+    lines[:] = [line for line in lines if unit_test_name not in line]
+
+
+def add_unit_test_if_not_present(lines, unit_test_name, file_name):
+    # core_getversion is mandatory
+    if "core_getversion" in unit_test_name:
+        print("WARNING: core_getversion should pass, not adding it to", os.path.basename(file_name))
+    elif all(unit_test_name not in line for line in lines):
+        lines.append(unit_test_name + "\n")
+
+
+def update_unit_test_result_in_fails_txt(fails_txt, unit_test):
+    unit_test_name, unit_test_result = get_unit_test_name_and_results(unit_test)
+    for i, line in enumerate(fails_txt):
+        if unit_test_name in line:
+            _, current_result = get_unit_test_name_and_results(line)
+            fails_txt[i] = unit_test + "\n"
+            return
+
+
+def add_unit_test_or_update_result_to_fails_if_present(fails_txt, unit_test, fails_txt_path):
+    unit_test_name, _ = get_unit_test_name_and_results(unit_test)
+    if not is_test_present_on_file(fails_txt, unit_test_name):
+        add_unit_test_if_not_present(fails_txt, unit_test, fails_txt_path)
+    # if it is present but not with the same result
+    elif not is_test_present_on_file(fails_txt, unit_test):
+        update_unit_test_result_in_fails_txt(fails_txt, unit_test)
+
+
+def split_unit_test_from_collate(xfails):
+    for job_name in xfails.keys():
+        for job_id in xfails[job_name].copy().keys():
+            if "not found" in xfails[job_name][job_id]:
+                del xfails[job_name][job_id]
+                continue
+            xfails[job_name][job_id] = xfails[job_name][job_id].strip().split("\n")
+
+
+def get_xfails_from_pipeline_url(pipeline_url):
+    parsed_url = urlparse(pipeline_url)
+    path_components = parsed_url.path.strip("/").split("/")
+
+    namespace = path_components[0]
+    project = path_components[1]
+    pipeline_id = path_components[-1]
+
+    print("Collating from:", namespace, project, pipeline_id)
+    xfails = (
+        Collate(namespace=namespace, project=project)
+        .from_pipeline(pipeline_id)
+        .get_artifact("results/failures.csv")
+    )
+
+    split_unit_test_from_collate(xfails)
+    return xfails
+
+
+def get_xfails_from_pipeline_urls(pipelines_urls):
+    xfails = defaultdict(dict)
+
+    for url in pipelines_urls:
+        new_xfails = get_xfails_from_pipeline_url(url)
+        for key in new_xfails:
+            xfails[key].update(new_xfails[key])
+
+    return xfails
+
+
+def print_diff(old_content, new_content, file_name):
+    diff = difflib.unified_diff(old_content, new_content, lineterm="", fromfile=file_name, tofile=file_name)
+    diff = [colored(line, "green") if line.startswith("+") else
+            colored(line, "red") if line.startswith("-") else line for line in diff]
+    print("\n".join(diff[:3]))
+    print("".join(diff[3:]))
+
+
+def main(pipelines_urls, only_flakes):
+    xfails = get_xfails_from_pipeline_urls(pipelines_urls)
+
+    for job_name in xfails.keys():
+        fails_txt_path = get_xfails_file_path(job_name, "fails")
+        flakes_txt_path = get_xfails_file_path(job_name, "flakes")
+
+        fails_txt = read_file(fails_txt_path)
+        flakes_txt = read_file(flakes_txt_path)
+
+        fails_txt_original = fails_txt.copy()
+        flakes_txt_original = flakes_txt.copy()
+
+        for job_id in xfails[job_name].keys():
+            for unit_test in xfails[job_name][job_id]:
+                unit_test_name, unit_test_result = get_unit_test_name_and_results(unit_test)
+
+                if not unit_test_name:
+                    continue
+
+                if only_flakes:
+                    remove_unit_test_if_present(fails_txt, unit_test_name)
+                    add_unit_test_if_not_present(flakes_txt, unit_test_name, flakes_txt_path)
+                    continue
+
+                # drop it from flakes if it is present, to analyse it again
+                remove_unit_test_if_present(flakes_txt, unit_test_name)
+
+                if unit_test_result == "UnexpectedPass":
+                    remove_unit_test_if_present(fails_txt, unit_test_name)
+                    # flake result
+                    if not is_unit_test_present_in_other_jobs(unit_test, xfails[job_name]):
+                        add_unit_test_if_not_present(flakes_txt, unit_test_name, flakes_txt_path)
+                    continue
+
+                # flake result
+                if not is_unit_test_present_in_other_jobs(unit_test, xfails[job_name]):
+                    remove_unit_test_if_present(fails_txt, unit_test_name)
+                    add_unit_test_if_not_present(flakes_txt, unit_test_name, flakes_txt_path)
+                    continue
+
+                # consistent result
+                add_unit_test_or_update_result_to_fails_if_present(fails_txt, unit_test,
+                                                                   fails_txt_path)
+
+        fails_txt.sort()
+        flakes_txt.sort()
+
+        if fails_txt != fails_txt_original:
+            save_file(fails_txt, fails_txt_path)
+            print_diff(fails_txt_original, fails_txt, os.path.basename(fails_txt_path))
+        if flakes_txt != flakes_txt_original:
+            save_file(flakes_txt, flakes_txt_path)
+            print_diff(flakes_txt_original, flakes_txt, os.path.basename(flakes_txt_path))
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(description="Update xfails from a given pipeline.")
+    parser.add_argument("pipeline_urls", nargs="+", type=str, help="URLs of the pipelines to analyse the failures from.")
+    parser.add_argument("--only-flakes", action="store_true", help="Treat every detected failure as a flake; edit *-flakes.txt only.")
+
+    args = parser.parse_args()
+
+    main(args.pipeline_urls, args.only_flakes)
+    print("Done.")
--
2.39.2