From: Wenchao Hao
To: Steven Rostedt, Masami Hiramatsu, Andrew Morton, Wenchao Hao
Subject: [PATCH] cma: tracing: Print alloc result in trace_cma_alloc_finish
Date: Thu, 8 Dec 2022 22:21:30 +0800
Message-ID: <20221208142130.1501195-1-haowenchao@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The result of the allocation is currently not recorded in
trace_cma_alloc_finish, but it is useful to have it there: with the
result in the event we can set filters to catch a specific allocation
error, or trigger operations when a specific error occurs. The result
is already printed to the kernel log, but that message is conditional
and cannot be filtered the way a trace event can. What's more,
recording the result in the event introduces very little overhead.

The allocation result is exposed as the errorno field of the trace
event.

Signed-off-by: Wenchao Hao
---
 include/trace/events/cma.h | 32 +++++++++++++++++++++++++++++---
 mm/cma.c                   |  2 +-
 2 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h
index 3d708dae1542..ef75ea606ab2 100644
--- a/include/trace/events/cma.h
+++ b/include/trace/events/cma.h
@@ -91,12 +91,38 @@ TRACE_EVENT(cma_alloc_start,
 		  __entry->align)
 );
 
-DEFINE_EVENT(cma_alloc_class, cma_alloc_finish,
+TRACE_EVENT(cma_alloc_finish,
 
 	TP_PROTO(const char *name, unsigned long pfn, const struct page *page,
-		 unsigned long count, unsigned int align),
+		 unsigned long count, unsigned int align, int errorno),
 
-	TP_ARGS(name, pfn, page, count, align)
+	TP_ARGS(name, pfn, page, count, align, errorno),
+
+	TP_STRUCT__entry(
+		__string(name, name)
+		__field(unsigned long, pfn)
+		__field(const struct page *, page)
+		__field(unsigned long, count)
+		__field(unsigned int, align)
+		__field(int, errorno)
+	),
+
+	TP_fast_assign(
+		__assign_str(name, name);
+		__entry->pfn = pfn;
+		__entry->page = page;
+		__entry->count = count;
+		__entry->align = align;
+		__entry->errorno = errorno;
+	),
+
+	TP_printk("name=%s pfn=0x%lx page=%p count=%lu align=%u errorno=%d",
+		  __get_str(name),
+		  __entry->pfn,
+		  __entry->page,
+		  __entry->count,
+		  __entry->align,
+		  __entry->errorno)
 );
 
 DEFINE_EVENT(cma_alloc_class, cma_alloc_busy_retry,
diff --git a/mm/cma.c b/mm/cma.c
index 4a978e09547a..a75b17b03b66 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -491,7 +491,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		start = bitmap_no + mask + 1;
 	}
 
-	trace_cma_alloc_finish(cma->name, pfn, page, count, align);
+	trace_cma_alloc_finish(cma->name, pfn, page, count, align, ret);
 
 	/*
 	 * CMA can allocate multiple page blocks, which results in different
-- 
2.32.0
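
As an illustration of what the new field enables (not part of the change
itself), the sketch below shows one way to consume it from user space. It
assumes tracefs is mounted at /sys/kernel/tracing and that this patch is
applied; the write_str() helper exists only for the example. It arms a
filter on errorno so that only failed CMA allocations are recorded:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a string to a tracefs control file and report any failure. */
    static int write_str(const char *path, const char *val)
    {
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror(path);
                    return -1;
            }
            if (write(fd, val, strlen(val)) < 0) {
                    perror(path);
                    close(fd);
                    return -1;
            }
            return close(fd);
    }

    int main(void)
    {
            /* Record only cma_alloc_finish events whose errorno is non-zero. */
            if (write_str("/sys/kernel/tracing/events/cma/cma_alloc_finish/filter",
                          "errorno != 0"))
                    return 1;

            /* Enable the event; matching records show up in trace/trace_pipe. */
            if (write_str("/sys/kernel/tracing/events/cma/cma_alloc_finish/enable",
                          "1"))
                    return 1;

            return 0;
    }

The same strings can of course be written to the filter and enable files
from a shell; the point is that, unlike the conditional message in the
kernel log, the trace event can be filtered (or combined with event
triggers) on the exact error value.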