Date: Mon, 17 Sep 2018 23:05:32 +0800
From: Guo Ren <ren_guo@c-sky.com>
To: Peter Zijlstra
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de, daniel.lezcano@linaro.org, jason@lakedaemon.net,
	arnd@arndb.de, devicetree@vger.kernel.org,
	andrea.parri@amarulasolutions.com, c-sky_gcc_upstream@c-sky.com,
	gnu-csky@mentor.com, thomas.petazzoni@bootlin.com, wbx@uclibc-ng.org,
	green.hu@gmail.com
Subject: Re: [PATCH V3 11/27] csky: Atomic operations
Message-ID: <20180917150532.GC2612@guoren-Inspiron-7460>
References: <93e8b592e429c156ad4d4ca5d85ef48fd0ab8b70.1536757532.git.ren_guo@c-sky.com>
	<20180912155514.GV24082@hirez.programming.kicks-ass.net>
	<20180915145512.GA18355@guoren-Inspiron-7460>
	<20180917081755.GO24124@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20180917081755.GO24124@hirez.programming.kicks-ass.net>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Sep 17, 2018 at 10:17:55AM +0200, Peter Zijlstra wrote:
> On Sat, Sep 15, 2018 at 10:55:13PM +0800, Guo Ren wrote:
> > > > +#define ATOMIC_OP_RETURN(op, c_op) \
> > >
> > > > +#define ATOMIC_FETCH_OP(op, c_op) \
> > >
> > > For these you could generate _relaxed variants and not provide smp_mb()
> > > inside them.
> >
> > Ok, but I'll modify it in the next commit.
>
> That's fine. Just wanted to let you know about _relaxed() since it will
> benefit your platform.

Thank you.

> > > > +#define ATOMIC_OP(op, c_op) \
> > > > +static inline void atomic_##op(int i, atomic_t *v) \
> > > > +{ \
> > > > +	unsigned long tmp, flags; \
> > > > + \
> > > > +	raw_local_irq_save(flags); \
> > > > + \
> > > > +	asm volatile ( \
> > > > +	"	ldw	%0, (%2) \n" \
> > > > +	"	" #op "	%0, %1   \n" \
> > > > +	"	stw	%0, (%2) \n" \
> > > > +		: "=&r" (tmp) \
> > > > +		: "r" (i), "r"(&v->counter) \
> > > > +		: "memory"); \
> > > > + \
> > > > +	raw_local_irq_restore(flags); \
> > > > +}
> > >
> > > Is this really 'better' than the generic UP fallback implementation?
> >
> > There is a lock irq instruction "idly4" without irq_save, eg:
> >
> >	asm volatile ( \
> >	"	idly4		 \n" \
> >	"	ldw	%0, (%2) \n" \
> >	"	" #op "	%0, %1   \n" \
> >	"	stw	%0, (%2) \n" \
> >
> > I'll change to that after it is fully tested.
>
> That is pretty nifty, could you explain (or reference me to an arch doc
> that does) the exact semantics of that "idly4" instruction?

The idly4 instruction keeps the 4 instructions that follow it from
responding to interrupts. When ldw takes an exception, it sets the carry
flag to 1, so I need to prepare the assembly like this:

1:	cmpne	r0, r0
	idly4
	ldw	%0, (%2)
	bt	1b
	" #op "	...
	stw	...

I need to run more stress tests on it, and then I'll change to it.
> > > > +static inline void arch_spin_lock(arch_spinlock_t *lock)
> > > > +{
> > > > +	arch_spinlock_t lockval;
> > > > +	u32 ticket_next = 1 << TICKET_NEXT;
> > > > +	u32 *p = &lock->lock;
> > > > +	u32 tmp;
> > > > +
> > > > +	smp_mb();
> > >
> > > spin_lock() doesn't need smp_mb() before.
> >
> > read_lock and write_lock also needn't smp_mb() before, isn't it?
>
> Correct. The various *_lock() functions only need imply an ACQUIRE
> barrier, such that the critical section happens after the lock is taken.
>
> > > > +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > > > +{
> > > > +	smp_mb();
> > > > +	lock->tickets.owner++;
> > > > +	smp_mb();
> > >
> > > spin_unlock() doesn't need smp_mb() after.
> >
> > read_unlock and write_unlock also needn't smp_mb() after, isn't it?
>
> Indeed so, the various *_unlock() functions only need imply a RELEASE
> barrier, such that the critical section happened before the lock is
> released.
>
> In both cases (lock and unlock) there is a great amount of subtle
> detail, but most of that is irrelevant if all you have is smp_mb().

Got it, thanks for the explanation.

> > > > +/*
> > > > + * Test-and-set spin-locking.
> > > > + */
> > >
> > > Why retain that?
> > >
> > > same comments; it has far too many smp_mb()s in.
> >
> > I'm not sure about queued_rwlocks, and just for a 2-core SMP
> > test-and-set is faster and simpler, isn't it?
>
> Even on 2 cores I think you can create starvation cases with
> test-and-set spinlocks. And the maintenance overhead of carrying two
> lock implementations is non-trivial.
>
> As to performance; I cannot say, but the ticket lock isn't very
> expensive, you could benchmark of course.

The ticket lock is good. But how do queued_rwlocks compare with my
test-and-set rwlock? I'm not sure about queued_rwlocks; I have only
implemented the ticket spinlock.

Best Regards
 Guo Ren