2010-08-29 18:49:12

by Dmitry Torokhov

Subject: Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver

Hi Hemanth,

On Fri, May 21, 2010 at 12:22:57PM +0530, Hemanth V wrote:
> From: Hemanth V <[email protected]>
> Date: Thu, 20 May 2010 20:18:17 +0530
> Subject: [PATCH] input: CMA3000 Accelerometer Driver
>
> This patch adds support for CMA3000 Tri-axis accelerometer, which
> supports Motion detect, Measurement and Free fall modes.
> CMA3000 supports both I2C/SPI bus for communication, currently the
> driver supports I2C based communication.
>
> Driver reports acceleration data through input subsystem and supports
> sysfs for configuration changes.
>
> This is V2 of patch, which fixes open source review comments
>
> Signed-off-by: Hemanth V <[email protected]>
> Cc: Dmitry Torokhov <[email protected]>
> ---
> Documentation/input/cma3000_d0x.txt | 112 ++++++
> drivers/input/misc/Kconfig | 7 +
> drivers/input/misc/Makefile | 1 +
> drivers/input/misc/cma3000_d0x.c | 633 ++++++++++++++++++++++++++++++++++
> drivers/input/misc/cma3000_d0x.h | 46 +++
> drivers/input/misc/cma3000_d0x_i2c.c | 136 ++++++++
> include/linux/i2c/cma3000.h | 60 ++++
> 7 files changed, 995 insertions(+), 0 deletions(-)
> create mode 100644 Documentation/input/cma3000_d0x.txt
> create mode 100644 drivers/input/misc/cma3000_d0x.c
> create mode 100644 drivers/input/misc/cma3000_d0x.h
> create mode 100644 drivers/input/misc/cma3000_d0x_i2c.c
> create mode 100644 include/linux/i2c/cma3000.h
>
> diff --git a/Documentation/input/cma3000_d0x.txt b/Documentation/input/cma3000_d0x.txt
> new file mode 100644
> index 0000000..29ab6b7
> --- /dev/null
> +++ b/Documentation/input/cma3000_d0x.txt
> @@ -0,0 +1,112 @@
> +Kernel driver for CMA3000-D0x
> +============================
> +
> +Supported chips:
> +* VTI CMA3000-D0x
> +Datasheet:
> + CMA3000-D0X Product Family Specification 8281000A.02.pdf
> +
> +Author: Hemanth V <[email protected]>
> +
> +
> +Description
> +-----------
> +CMA3000 Tri-axis accelerometer supports Motion detect, Measurement and
> +Free fall modes.
> +
> +Motion Detect Mode: Its the low power mode where interrupts are generated only
> +when motion exceeds the defined thresholds.
> +
> +Measurement Mode: This mode is used to read the acceleration data on X,Y,Z
> +axis and supports 400, 100, 40 Hz sample frequency.
> +
> +Free fall Mode: This mode is intented to save system resources.
> +
> +Threshold values: Chip supports defining threshold values for above modes
> +which includes time and g value. Refer product specifications for more details.
> +
> +CMA3000 supports both I2C/SPI bus for communication, currently the driver
> +supports I2C based communication.
> +
> +Driver reports acceleration data through input subsystem and supports sysfs
> +for configuration changes. It generates ABS_MISC event with value 1 when
> +free fall is detected.
> +
> +Platform data need to be configured for initial default values.
> +
> +Platform Data
> +-------------
> +fuzz_x: Noise on X Axis
> +
> +fuzz_y: Noise on Y Axis
> +
> +fuzz_z: Noise on Z Axis
> +
> +g_range: G range in milli g i.e 2000 or 8000
> +
> +mode: Default Operating mode
> +
> +mdthr: Motion detect threshold value
> +
> +mdfftmr: Motion detect and free fall time value
> +
> +ffthr: Free fall threshold value
> +
> +Input Interface
> +--------------
> +Input driver version is 1.0.0
> +Input device ID: bus 0x18 vendor 0x0 product 0x0 version 0x0
> +Input device name: "cma3000-acclerometer"
> +Supported events:
> + Event type 0 (Sync)
> + Event type 3 (Absolute)
> + Event code 0 (X)
> + Value 47
> + Min -8000
> + Max 8000
> + Fuzz 200
> + Event code 1 (Y)
> + Value -28
> + Min -8000
> + Max 8000
> + Fuzz 200
> + Event code 2 (Z)
> + Value 905
> + Min -8000
> + Max 8000
> + Fuzz 200
> + Event code 40 (Misc)
> + Value 0
> + Min 0
> + Max 1
> + Event type 4 (Misc)
> +
> +Sysfs entries
> +-------------
> +
> +mode:
> + 0: power down mode
> + 1: 100 Hz Measurement mode
> + 2: 400 Hz Measurement mode
> + 3: 40 Hz Measurement mode
> + 4: Motion Detect mode (default)
> + 5: 100 Hz Free fall mode
> + 6: 40 Hz Free fall mode
> + 7: Power off mode
> +
> +grange:
> + 2000: 2000 mg or 2G Range
> + 8000: 8000 mg or 8G Range
> +
> +mdthr:
> + X: X * 71mg (8G Range)
> + X: X * 18mg (2G Range)
> +
> +mdfftmr:
> + X: (X & 0x70) * 100 ms (MDTMR)
> + (X & 0x0F) * 2.5 ms (FFTMR 400 Hz)
> + (X & 0x0F) * 10 ms (FFTMR 100 Hz)
> +
> +ffthr:
> + X: (X >> 2) * 18mg (2G Range)
> + X: (X & 0x0F) * 71 mg (8G Range)
> diff --git a/drivers/input/misc/Kconfig b/drivers/input/misc/Kconfig
> index 1cf25ee..043ee8d 100755
> --- a/drivers/input/misc/Kconfig
> +++ b/drivers/input/misc/Kconfig
> @@ -340,4 +340,11 @@
> To compile this driver as a module, choose M here: the
> module will be called pcap_keys.
>
> +config INPUT_CMA3000_I2C
> + bool "VTI CMA3000 Tri-axis accelerometer"

Why is it boolean? It should be possible to configure it as a module.

> + depends on I2C && SYSFS

Is SYSFS a hard dependency? Why?

Overall I think you should have a symbol enabling the CMA3000 core and then
sub-drivers enabling the I2C and SPI interface parts. See the adxl134x driver
for the pattern I am looking for. Now that I am done reading the driver,
you seem to be pretty much there; just the Kconfig needs to be massaged a
bit.
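
Roughly what I have in mind (only a sketch; the symbol and module names
here are just illustrative):

config INPUT_CMA3000
        tristate "VTI CMA3000 Tri-axis accelerometer"
        help
          Say Y here if you want to use the VTI CMA3000_D0x Accelerometer
          driver.

          To compile this driver as a module, choose M here: the
          module will be called cma3000_d0x.

config INPUT_CMA3000_I2C
        tristate "Support I2C bus connection"
        depends on INPUT_CMA3000 && I2C
        help
          Say Y here if you want to use the CMA3000 Accelerometer
          through the I2C interface.

          To compile this driver as a module, choose M here: the
          module will be called cma3000_d0x_i2c.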

> + help
> + Say Y here if you want to use VTI CMA3000 Accelerometer
> + through I2C interface.
> +
> endif
> diff --git a/drivers/input/misc/Makefile b/drivers/input/misc/Makefile
> index 07ee237..011161d 100644
> --- a/drivers/input/misc/Makefile
> +++ b/drivers/input/misc/Makefile
> @@ -32,3 +32,4 @@
> obj-$(CONFIG_INPUT_WISTRON_BTNS) += wistron_btns.o
> obj-$(CONFIG_INPUT_WM831X_ON) += wm831x-on.o
> obj-$(CONFIG_INPUT_YEALINK) += yealink.o
> +obj-$(CONFIG_INPUT_CMA3000_I2C) += cma3000_d0x.o cma3000_d0x_i2c.o

Please try to keep the Makefile and Kconfig alphabetically ordered.

> diff --git a/drivers/input/misc/cma3000_d0x.c b/drivers/input/misc/cma3000_d0x.c
> new file mode 100644
> index 0000000..812c464
> --- /dev/null
> +++ b/drivers/input/misc/cma3000_d0x.c
> @@ -0,0 +1,633 @@
> +/*
> + * cma3000_d0x.c
> + * VTI CMA3000_D0x Accelerometer driver
> + * Supports I2C/SPI interfaces
> + *
> + * Copyright (C) 2010 Texas Instruments
> + * Author: Hemanth V <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/interrupt.h>
> +#include <linux/delay.h>
> +#include <linux/input.h>
> +#include <linux/platform_device.h>
> +#include <linux/i2c/cma3000.h>
> +
> +#include "cma3000_d0x.h"
> +
> +#define CMA3000_WHOAMI 0x00
> +#define CMA3000_REVID 0x01
> +#define CMA3000_CTRL 0x02
> +#define CMA3000_STATUS 0x03
> +#define CMA3000_RSTR 0x04
> +#define CMA3000_INTSTATUS 0x05
> +#define CMA3000_DOUTX 0x06
> +#define CMA3000_DOUTY 0x07
> +#define CMA3000_DOUTZ 0x08
> +#define CMA3000_MDTHR 0x09
> +#define CMA3000_MDFFTMR 0x0A
> +#define CMA3000_FFTHR 0x0B
> +
> +#define CMA3000_RANGE2G (1 << 7)
> +#define CMA3000_RANGE8G (0 << 7)
> +#define CMA3000_BUSI2C (0 << 4)
> +#define CMA3000_MODEMASK (7 << 1)
> +#define CMA3000_GRANGEMASK (1 << 7)
> +
> +#define CMA3000_STATUS_PERR 1
> +#define CMA3000_INTSTATUS_FFDET (1 << 2)
> +
> +/* Settling time delay in ms */
> +#define CMA3000_SETDELAY 30
> +
> +/* Delay for clearing interrupt in us */
> +#define CMA3000_INTDELAY 44
> +
> +
> +/*
> + * Bit weights in mg for bit 0, other bits need
> + * multipy factor 2^n. Eight bit is the sign bit.
> + */
> +#define BIT_TO_2G 18
> +#define BIT_TO_8G 71
> +
> +/*
> + * Conversion for each of the eight modes to g, depending
> + * on G range i.e 2G or 8G. Some modes always operate in
> + * 8G.
> + */
> +
> +static int mode_to_mg[8][2] = {
> + {0, 0},
> + {BIT_TO_8G, BIT_TO_2G},
> + {BIT_TO_8G, BIT_TO_2G},
> + {BIT_TO_8G, BIT_TO_8G},
> + {BIT_TO_8G, BIT_TO_8G},
> + {BIT_TO_8G, BIT_TO_2G},
> + {BIT_TO_8G, BIT_TO_2G},
> + {0, 0},
> +};
> +
> +static ssize_t cma3000_show_attr_mode(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + uint8_t mode;
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> +
> + mode = cma3000_read(data, CMA3000_CTRL, "ctrl");
> + if (mode < 0)
> + return mode;
> +
> + return sprintf(buf, "%d\n", (mode & CMA3000_MODEMASK) >> 1);
> +}
> +
> +static ssize_t cma3000_store_attr_mode(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> + unsigned long val;
> + int error;
> + uint8_t ctrl;
> +
> + error = strict_strtoul(buf, 0, &val);
> + if (error)
> + goto err_op3_failed;
> +
> + if (val < CMAMODE_DEFAULT || val > CMAMODE_POFF) {
> + error = -EINVAL;
> + goto err_op3_failed;
> + }
> +
> + mutex_lock(&data->mutex);
> + val &= (CMA3000_MODEMASK >> 1);
> + ctrl = cma3000_read(data, CMA3000_CTRL, "ctrl");
> + if (ctrl < 0) {
> + error = ctrl;
> + goto err_op2_failed;
> + }
> +
> + ctrl &= ~CMA3000_MODEMASK;
> + ctrl |= (val << 1);
> + data->pdata.mode = val;
> + disable_irq(data->client->irq);
> +
> + error = cma3000_set(data, CMA3000_CTRL, ctrl, "ctrl");
> + if (error < 0)
> + goto err_op1_failed;
> +
> + /* Settling time delay required after mode change */
> + msleep(CMA3000_SETDELAY);
> +
> + enable_irq(data->client->irq);
> + mutex_unlock(&data->mutex);
> + return count;
> +
> +err_op1_failed:
> + enable_irq(data->client->irq);
> +err_op2_failed:
> + mutex_unlock(&data->mutex);
> +err_op3_failed:
> + return error;


I do not like the repeated release of resources in the main and error
paths... Can we do it like this:

        ...
        disable_irq(data->client->irq);
        error = cma3000_set(data, CMA3000_CTRL, ctrl, "ctrl");
        if (!error) {
                /* Settling time delay required after mode change */
                msleep(CMA3000_SETDELAY);
        }
        enable_irq(data->client->irq);
out:
        mutex_unlock(&data->mutex);
        return error ? error : count;


> +}
> +
> +static ssize_t cma3000_show_attr_grange(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + uint8_t mode;
> + int g_range;
> +
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> +
> + mode = cma3000_read(data, CMA3000_CTRL, "ctrl");
> + if (mode < 0)
> + return mode;
> +
> + g_range = (mode & CMA3000_GRANGEMASK) ? CMARANGE_2G : CMARANGE_8G;
> + return sprintf(buf, "%d\n", g_range);
> +}
> +
> +static ssize_t cma3000_store_attr_grange(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> + unsigned long val;
> + int error, g_range, fuzz_x, fuzz_y, fuzz_z;
> + uint8_t ctrl;
> +
> + error = strict_strtoul(buf, 0, &val);
> + if (error)
> + goto err_op3_failed;
> +
> + mutex_lock(&data->mutex);
> + ctrl = cma3000_read(data, CMA3000_CTRL, "ctrl");
> + if (ctrl < 0) {
> + error = ctrl;
> + goto err_op2_failed;
> + }
> +
> + ctrl &= ~CMA3000_GRANGEMASK;
> +
> + if (val == CMARANGE_2G) {
> + ctrl |= CMA3000_RANGE2G;
> + data->pdata.g_range = CMARANGE_2G;
> + } else if (val == CMARANGE_8G) {
> + ctrl |= CMA3000_RANGE8G;
> + data->pdata.g_range = CMARANGE_8G;
> + } else {
> + error = -EINVAL;
> + goto err_op2_failed;
> + }
> +
> + g_range = data->pdata.g_range;
> + fuzz_x = data->pdata.fuzz_x;
> + fuzz_y = data->pdata.fuzz_y;
> + fuzz_z = data->pdata.fuzz_z;
> +
> + disable_irq(data->client->irq);
> + error = cma3000_set(data, CMA3000_CTRL, ctrl, "ctrl");
> + if (error < 0)
> + goto err_op1_failed;
> +
> + input_set_abs_params(data->input_dev, ABS_X, -g_range,
> + g_range, fuzz_x, 0);
> + input_set_abs_params(data->input_dev, ABS_Y, -g_range,
> + g_range, fuzz_y, 0);
> + input_set_abs_params(data->input_dev, ABS_Z, -g_range,
> + g_range, fuzz_z, 0);
> +
> + enable_irq(data->client->irq);
> + mutex_unlock(&data->mutex);
> + return count;
> +
> +err_op1_failed:
> + enable_irq(data->client->irq);
> +err_op2_failed:
> + mutex_unlock(&data->mutex);
> +err_op3_failed:
> + return error;
> +}
> +
> +static ssize_t cma3000_show_attr_mdthr(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + uint8_t mode;
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> +
> + mode = cma3000_read(data, CMA3000_MDTHR, "mdthr");
> + if (mode < 0)
> + return mode;
> +
> + return sprintf(buf, "%d\n", mode);
> +}
> +
> +static ssize_t cma3000_store_attr_mdthr(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> + unsigned long val;
> + int error;
> +
> + error = strict_strtoul(buf, 0, &val);
> + if (error)
> + return error;
> +
> + mutex_lock(&data->mutex);
> + data->pdata.mdthr = val;
> + disable_irq(data->client->irq);
> + error = cma3000_set(data, CMA3000_MDTHR, val, "mdthr");
> + enable_irq(data->client->irq);
> + mutex_unlock(&data->mutex);
> +
> + /* If there was error during write, return error */
> + if (error < 0)
> + return error;
> + else
> + return count;
> +}
> +
> +static ssize_t cma3000_show_attr_mdfftmr(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + uint8_t mode;
> +
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> +
> + mode = cma3000_read(data, CMA3000_MDFFTMR, "mdfftmr");
> + if (mode < 0)
> + return mode;
> +
> + return sprintf(buf, "%d\n", mode);
> +}
> +
> +static ssize_t cma3000_store_attr_mdfftmr(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> + unsigned long val;
> + int error;
> +
> + error = strict_strtoul(buf, 0, &val);
> + if (error)
> + return error;
> +
> + mutex_lock(&data->mutex);
> + data->pdata.mdfftmr = val;
> + disable_irq(data->client->irq);
> + error = cma3000_set(data, CMA3000_MDFFTMR, val, "mdthr");
> + enable_irq(data->client->irq);
> + mutex_unlock(&data->mutex);
> +
> + /* If there was error during write, return error */
> + if (error < 0)
> + return error;
> + else
> + return count;
> +}
> +
> +static ssize_t cma3000_show_attr_ffthr(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + uint8_t mode;
> +
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> +
> + mode = cma3000_read(data, CMA3000_FFTHR, "ffthr");
> + if (mode < 0)
> + return mode;
> +
> + return sprintf(buf, "%d\n", mode);
> +}
> +
> +static ssize_t cma3000_store_attr_ffthr(struct device *dev,
> + struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct cma3000_accl_data *data = platform_get_drvdata(pdev);
> + unsigned long val;
> + int error;
> +
> + error = strict_strtoul(buf, 0, &val);
> + if (error)
> + return error;
> +
> + mutex_lock(&data->mutex);
> + data->pdata.ffthr = val;
> + disable_irq(data->client->irq);
> + error = cma3000_set(data, CMA3000_FFTHR, val, "mdthr");
> + enable_irq(data->client->irq);
> + mutex_unlock(&data->mutex);
> +
> + /* If there was error during write, return error */
> + if (error < 0)
> + return error;
> + else
> + return count;
> +}
> +
> +static DEVICE_ATTR(mode, S_IWUSR | S_IRUGO,
> + cma3000_show_attr_mode, cma3000_store_attr_mode);
> +
> +static DEVICE_ATTR(grange, S_IWUSR | S_IRUGO,
> + cma3000_show_attr_grange, cma3000_store_attr_grange);
> +
> +static DEVICE_ATTR(mdthr, S_IWUSR | S_IRUGO,
> + cma3000_show_attr_mdthr, cma3000_store_attr_mdthr);
> +
> +static DEVICE_ATTR(mdfftmr, S_IWUSR | S_IRUGO,
> + cma3000_show_attr_mdfftmr, cma3000_store_attr_mdfftmr);
> +
> +static DEVICE_ATTR(ffthr, S_IWUSR | S_IRUGO,
> + cma3000_show_attr_ffthr, cma3000_store_attr_ffthr);
> +
> +
> +static struct attribute *cma_attrs[] = {
> + &dev_attr_mode.attr,
> + &dev_attr_grange.attr,
> + &dev_attr_mdthr.attr,
> + &dev_attr_mdfftmr.attr,
> + &dev_attr_ffthr.attr,
> + NULL,
> +};
> +
> +static struct attribute_group cma3000_attr_group = {
> + .attrs = cma_attrs,
> +};
> +
> +static void decode_mg(struct cma3000_accl_data *data, int *datax,
> + int *datay, int *dataz)
> +{
> + /* Data in 2's complement, convert to mg */
> + *datax = (((s8)(*datax)) * (data->bit_to_mg));
> + *datay = (((s8)(*datay)) * (data->bit_to_mg));
> + *dataz = (((s8)(*dataz)) * (data->bit_to_mg));
> +}
> +
> +static irqreturn_t cma3000_thread_irq(int irq, void *dev_id)
> +{
> + struct cma3000_accl_data *data = dev_id;
> + int datax, datay, dataz;
> + u8 ctrl, mode, range, intr_status;
> +
> + intr_status = cma3000_read(data, CMA3000_INTSTATUS, "interrupt status");
> + if (intr_status < 0)
> + return IRQ_NONE;
> +
> + /* Check if free fall is detected, report immediately */
> + if (intr_status & CMA3000_INTSTATUS_FFDET) {
> + input_report_abs(data->input_dev, ABS_MISC, 1);
> + input_sync(data->input_dev);
> + } else {
> + input_report_abs(data->input_dev, ABS_MISC, 0);
> + }
> +
> + datax = cma3000_read(data, CMA3000_DOUTX, "X");
> + datay = cma3000_read(data, CMA3000_DOUTY, "Y");
> + dataz = cma3000_read(data, CMA3000_DOUTZ, "Z");
> +
> + ctrl = cma3000_read(data, CMA3000_CTRL, "ctrl");
> + mode = (ctrl & CMA3000_MODEMASK) >> 1;
> + range = (ctrl & CMA3000_GRANGEMASK) >> 7;
> +
> + data->bit_to_mg = mode_to_mg[mode][range];
> +
> + /* Interrupt not for this device */
> + if (data->bit_to_mg == 0)
> + return IRQ_NONE;
> +
> + /* Decode register values to milli g */
> + decode_mg(data, &datax, &datay, &dataz);
> +
> + input_report_abs(data->input_dev, ABS_X, datax);
> + input_report_abs(data->input_dev, ABS_Y, datay);
> + input_report_abs(data->input_dev, ABS_Z, dataz);
> + input_sync(data->input_dev);
> +
> + return IRQ_HANDLED;
> +}
> +
> +static int cma3000_reset(struct cma3000_accl_data *data)
> +{
> + int ret;
> +
> + /* Reset sequence */
> + cma3000_set(data, CMA3000_RSTR, 0x02, "Reset");
> + cma3000_set(data, CMA3000_RSTR, 0x0A, "Reset");
> + cma3000_set(data, CMA3000_RSTR, 0x04, "Reset");
> +
> + /* Settling time delay */
> + mdelay(10);
> +
> + ret = cma3000_read(data, CMA3000_STATUS, "Status");
> + if (ret < 0) {
> + dev_err(&data->client->dev, "Reset failed\n");
> + return ret;
> + } else if (ret & CMA3000_STATUS_PERR) {
> + dev_err(&data->client->dev, "Parity Error\n");
> + return -EIO;
> + } else {
> + return 0;
> + }
> +}
> +
> +int cma3000_poweron(struct cma3000_accl_data *data)
> +{
> + uint8_t ctrl = 0, mdthr, mdfftmr, ffthr, mode;
> + int g_range, ret;
> +
> + g_range = data->pdata.g_range;
> + mode = data->pdata.mode;
> + mdthr = data->pdata.mdthr;
> + mdfftmr = data->pdata.mdfftmr;
> + ffthr = data->pdata.ffthr;
> +
> + if (mode < CMAMODE_DEFAULT || mode > CMAMODE_POFF) {
> + data->pdata.mode = CMAMODE_MOTDET;
> + mode = data->pdata.mode;
> + dev_info(&data->client->dev,
> + "Invalid mode specified, assuming Motion Detect\n");
> + }
> +
> + if (g_range == CMARANGE_2G) {
> + ctrl = (mode << 1) | CMA3000_RANGE2G;
> + } else if (g_range == CMARANGE_8G) {
> + ctrl = (mode << 1) | CMA3000_RANGE8G;
> + } else {
> + dev_info(&data->client->dev,
> + "Invalid G range specified, assuming 8G\n");
> + ctrl = (mode << 1) | CMA3000_RANGE8G;
> + data->pdata.g_range = CMARANGE_8G;
> + }
> +#ifdef CONFIG_INPUT_CMA3000_I2C
> + ctrl |= CMA3000_BUSI2C;
> +#endif
> +
> + cma3000_set(data, CMA3000_MDTHR, mdthr, "Motion Detect Threshold");
> + cma3000_set(data, CMA3000_MDFFTMR, mdfftmr, "Time register");
> + cma3000_set(data, CMA3000_FFTHR, ffthr, "Free fall threshold");
> + ret = cma3000_set(data, CMA3000_CTRL, ctrl, "Mode setting");
> + if (ret < 0)
> + return -EIO;
> +
> + mdelay(CMA3000_SETDELAY);
> +
> + return 0;
> +}
> +
> +int cma3000_poweroff(struct cma3000_accl_data *data)
> +{
> + int ret;
> +
> + ret = cma3000_set(data, CMA3000_CTRL, CMAMODE_POFF, "Mode setting");
> + mdelay(CMA3000_SETDELAY);
> +
> + return ret;
> +}
> +
> +int cma3000_init(struct cma3000_accl_data *data)
> +{
> + int ret = 0, fuzz_x, fuzz_y, fuzz_z, g_range;
> + uint32_t irqflags;
> +
> + if (data->client->dev.platform_data == NULL) {
> + dev_err(&data->client->dev, "platform data not found\n");
> + goto err_op2_failed;
> + }
> +
> + memcpy(&(data->pdata), data->client->dev.platform_data,
> + sizeof(struct cma3000_platform_data));
> +
> + ret = cma3000_reset(data);
> + if (ret)
> + goto err_op2_failed;
> +
> + ret = cma3000_read(data, CMA3000_REVID, "Revid");
> + if (ret < 0)
> + goto err_op2_failed;
> +
> + pr_info("CMA3000 Acclerometer : Revision %x\n", ret);
> +
> + /* Bring it out of default power down state */
> + ret = cma3000_poweron(data);
> + if (ret < 0)
> + goto err_op2_failed;
> +
> + fuzz_x = data->pdata.fuzz_x;
> + fuzz_y = data->pdata.fuzz_y;
> + fuzz_z = data->pdata.fuzz_z;
> + g_range = data->pdata.g_range;
> + irqflags = data->pdata.irqflags;
> +
> + data->input_dev = input_allocate_device();
> + if (data->input_dev == NULL) {
> + ret = -ENOMEM;
> + dev_err(&data->client->dev,
> + "Failed to allocate input device\n");
> + goto err_op2_failed;
> + }
> +
> + data->input_dev->name = "cma3000-acclerometer";
> +
> +#ifdef CONFIG_INPUT_CMA3000_I2C
> + data->input_dev->id.bustype = BUS_I2C;
> +#endif
> +
> + __set_bit(EV_ABS, data->input_dev->evbit);
> + __set_bit(EV_MSC, data->input_dev->evbit);
> +
> + input_set_abs_params(data->input_dev, ABS_X, -g_range,
> + g_range, fuzz_x, 0);
> + input_set_abs_params(data->input_dev, ABS_Y, -g_range,
> + g_range, fuzz_y, 0);
> + input_set_abs_params(data->input_dev, ABS_Z, -g_range,
> + g_range, fuzz_z, 0);
> + input_set_abs_params(data->input_dev, ABS_MISC, 0,
> + 1, 0, 0);
> +
> + ret = input_register_device(data->input_dev);
> + if (ret) {
> + dev_err(&data->client->dev,
> + "Unable to register input device\n");
> + goto err_op2_failed;
> + }
> +
> + mutex_init(&data->mutex);
> +
> + if (data->client->irq) {
> + ret = request_threaded_irq(data->client->irq, NULL,
> + cma3000_thread_irq,
> + irqflags | IRQF_ONESHOT,
> + data->client->name, data);
> +
> + if (ret < 0) {
> + dev_err(&data->client->dev,
> + "request_threaded_irq failed\n");
> + goto err_op1_failed;
> + }
> + }

What is the utility of the driver when there is no IRQ line?

> +
> + ret = sysfs_create_group(&data->client->dev.kobj, &cma3000_attr_group);
> + if (ret) {
> + dev_err(&data->client->dev,
> + "failed to create sysfs entries\n");
> + goto err_op1_failed;
> + }
> + return 0;
> +
> +err_op1_failed:
> + mutex_destroy(&data->mutex);
> + input_unregister_device(data->input_dev);
> +err_op2_failed:
> + if (data != NULL) {

How can data be NULL here?

> + if (data->input_dev != NULL)
> + input_free_device(data->input_dev);

Do not call input_free_device() after input_unregister_device().
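
The rule is: input_free_device() is only for a device that has not (yet)
been registered successfully; once input_register_device() has succeeded,
input_unregister_device() alone is the whole teardown. A simplified sketch
of the tail of cma3000_init() (the irq setup in between is left out):

        ret = input_register_device(data->input_dev);
        if (ret) {
                /* not registered yet: must be freed explicitly */
                input_free_device(data->input_dev);
                return ret;
        }

        ret = sysfs_create_group(&data->client->dev.kobj, &cma3000_attr_group);
        if (ret) {
                /*
                 * already registered: unregistering drops the last
                 * reference, so no input_free_device() after this point
                 */
                input_unregister_device(data->input_dev);
                return ret;
        }

        return 0;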

> + }
> +
> + return ret;
> +}
> +
> +int cma3000_exit(struct cma3000_accl_data *data)
> +{
> + int ret;
> +
> + ret = cma3000_poweroff(data);
> +
> + if (data->client->irq)
> + free_irq(data->client->irq, data);
> +
> + mutex_destroy(&data->mutex);
> + input_unregister_device(data->input_dev);
> + input_free_device(data->input_dev);

You should not call input_free_device() after input_unregister_device().

> + sysfs_remove_group(&data->client->dev.kobj, &cma3000_attr_group);

I'd move this up, before you mark the mutex as destroyed...
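
In other words, something along these lines for cma3000_exit() - just a
sketch of the ordering I mean, with input_free_device() dropped and the
sysfs group removed before the mutex is torn down:

int cma3000_exit(struct cma3000_accl_data *data)
{
        int ret;

        ret = cma3000_poweroff(data);

        if (data->client->irq)
                free_irq(data->client->irq, data);

        sysfs_remove_group(&data->client->dev.kobj, &cma3000_attr_group);
        input_unregister_device(data->input_dev);
        mutex_destroy(&data->mutex);

        return ret;
}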

> + return ret;
> +}
> diff --git a/drivers/input/misc/cma3000_d0x.h b/drivers/input/misc/cma3000_d0x.h
> new file mode 100644
> index 0000000..12a8faf
> --- /dev/null
> +++ b/drivers/input/misc/cma3000_d0x.h
> @@ -0,0 +1,46 @@
> +/*
> + * cma3000_d0x.h
> + * VTI CMA3000_D0x Accelerometer driver
> + *
> + * Copyright (C) 2010 Texas Instruments
> + * Author: Hemanth V <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef INPUT_CMA3000_H
> +#define INPUT_CMA3000_H
> +
> +#include <linux/i2c.h>
> +#include <linux/input.h>
> +
> +struct cma3000_accl_data {
> +#ifdef CONFIG_INPUT_CMA3000_I2C
> + struct i2c_client *client;
> +#endif
> + struct input_dev *input_dev;
> + struct cma3000_platform_data pdata;
> +
> + /* mutex for sysfs operations */
> + struct mutex mutex;
> + int bit_to_mg;
> +};
> +
> +int cma3000_set(struct cma3000_accl_data *, u8, u8, char *);
> +int cma3000_read(struct cma3000_accl_data *, u8, char *);
> +int cma3000_init(struct cma3000_accl_data *);
> +int cma3000_exit(struct cma3000_accl_data *);
> +int cma3000_poweron(struct cma3000_accl_data *);
> +int cma3000_poweroff(struct cma3000_accl_data *);
> +
> +#endif
> diff --git a/drivers/input/misc/cma3000_d0x_i2c.c b/drivers/input/misc/cma3000_d0x_i2c.c
> new file mode 100644
> index 0000000..41f845c
> --- /dev/null
> +++ b/drivers/input/misc/cma3000_d0x_i2c.c
> @@ -0,0 +1,136 @@
> +/*
> + * cma3000_d0x_i2c.c
> + *
> + * Implements I2C interface for VTI CMA300_D0x Accelerometer driver
> + *
> + * Copyright (C) 2010 Texas Instruments
> + * Author: Hemanth V <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/i2c.h>
> +#include <linux/i2c/cma3000.h>
> +#include "cma3000_d0x.h"
> +
> +int cma3000_set(struct cma3000_accl_data *accl, u8 reg, u8 val, char *msg)
> +{
> + int ret = i2c_smbus_write_byte_data(accl->client, reg, val);
> + if (ret < 0)
> + dev_err(&accl->client->dev,
> + "i2c_smbus_write_byte_data failed (%s)\n", msg);
> + return ret;
> +}
> +
> +int cma3000_read(struct cma3000_accl_data *accl, u8 reg, char *msg)
> +{
> + int ret = i2c_smbus_read_byte_data(accl->client, reg);
> + if (ret < 0)
> + dev_err(&accl->client->dev,
> + "i2c_smbus_read_byte_data failed (%s)\n", msg);
> + return ret;
> +}
> +
> +static int __devinit cma3000_accl_probe(struct i2c_client *client,
> + const struct i2c_device_id *id)
> +{
> + int ret;
> + struct cma3000_accl_data *data = NULL;
> +
> + data = kzalloc(sizeof(*data), GFP_KERNEL);
> + if (data == NULL) {
> + ret = -ENOMEM;
> + goto err_op_failed;
> + }
> +
> + data->client = client;
> + i2c_set_clientdata(client, data);
> +
> + ret = cma3000_init(data);
> + if (ret)
> + goto err_op_failed;
> +
> + return 0;
> +
> +err_op_failed:
> +
> + if (data != NULL)
> + kfree(data);
> +
> + return ret;
> +}
> +
> +static int __devexit cma3000_accl_remove(struct i2c_client *client)
> +{
> + struct cma3000_accl_data *data = i2c_get_clientdata(client);
> + int ret;
> +
> + ret = cma3000_exit(data);
> + i2c_set_clientdata(client, NULL);
> + kfree(data);
> +
> + return ret;
> +}
> +
> +#ifdef CONFIG_PM
> +static int cma3000_accl_suspend(struct i2c_client *client, pm_message_t mesg)
> +{
> + struct cma3000_accl_data *data = i2c_get_clientdata(client);
> +
> + return cma3000_poweroff(data);
> +}
> +
> +static int cma3000_accl_resume(struct i2c_client *client)
> +{
> + struct cma3000_accl_data *data = i2c_get_clientdata(client);
> +
> + return cma3000_poweron(data);
> +}
> +#endif
> +
> +static const struct i2c_device_id cma3000_id[] = {
> + { "cma3000_accl", 0 },
> + { },
> +};
> +
> +static struct i2c_driver cma3000_accl_driver = {
> + .probe = cma3000_accl_probe,
> + .remove = cma3000_accl_remove,
> + .id_table = cma3000_id,
> +#ifdef CONFIG_PM
> + .suspend = cma3000_accl_suspend,
> + .resume = cma3000_accl_resume,
> +#endif
> + .driver = {
> + .name = "cma3000_accl"
> + },
> +};
> +
> +static int __init cma3000_accl_init(void)
> +{
> + return i2c_add_driver(&cma3000_accl_driver);
> +}
> +
> +static void __exit cma3000_accl_exit(void)
> +{
> + i2c_del_driver(&cma3000_accl_driver);
> +}
> +
> +module_init(cma3000_accl_init);
> +module_exit(cma3000_accl_exit);
> +
> +MODULE_DESCRIPTION("CMA3000-D0x Accelerometer Driver");
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Hemanth V <[email protected]>");
> diff --git a/include/linux/i2c/cma3000.h b/include/linux/i2c/cma3000.h
> new file mode 100644
> index 0000000..50aa3fc
> --- /dev/null
> +++ b/include/linux/i2c/cma3000.h
> @@ -0,0 +1,60 @@
> +/*
> + * cma3000.h
> + * VTI CMA300_Dxx Accelerometer driver
> + *
> + * Copyright (C) 2010 Texas Instruments
> + * Author: Hemanth V <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef _LINUX_CMA3000_I2C_H
> +#define _LINUX_CMA3000_I2C_H
> +
> +#define CMAMODE_DEFAULT 0
> +#define CMAMODE_MEAS100 1
> +#define CMAMODE_MEAS400 2
> +#define CMAMODE_MEAS40 3
> +#define CMAMODE_MOTDET 4
> +#define CMAMODE_FF100 5
> +#define CMAMODE_FF400 6
> +#define CMAMODE_POFF 7
> +
> +#define CMARANGE_2G 2000
> +#define CMARANGE_8G 8000
> +
> +/**
> + * struct cma3000_i2c_platform_data - CMA3000 Platform data
> + * @fuzz_x: Noise on X Axis
> + * @fuzz_y: Noise on Y Axis
> + * @fuzz_z: Noise on Z Axis
> + * @g_range: G range in milli g i.e 2000 or 8000
> + * @mode: Operating mode
> + * @mdthr: Motion detect threshold value
> + * @mdfftmr: Motion detect and free fall time value
> + * @ffthr: Free fall threshold value
> + */
> +
> +struct cma3000_platform_data {
> + int fuzz_x;
> + int fuzz_y;
> + int fuzz_z;
> + int g_range;
> + uint8_t mode;
> + uint8_t mdthr;
> + uint8_t mdfftmr;
> + uint8_t ffthr;
> + uint32_t irqflags;
> +};

Do you expect the SPI version to have significantly different platform data?
Should it be moved out of include/linux/i2c/?

Thanks.

--
Dmitry


2010-08-30 16:04:40

by Felipe Balbi

Subject: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

Hi Dmitry,

When we tried to push the N900's accelerometer driver as an
input device, you commented that you didn't want sensors such
as accelerometers, magnetometers, proximity, etc. on the
input layer because "they are not user input". Although
I didn't fully agree with you, we had to modify the drivers
and, I believe, one of them is sitting in staging under
the industrial I/O subsystem.

Are you now accepting sensor drivers on the input layer?
That will make our life a lot easier, but we need some
definition to avoid having to rework drivers when we
want to push them to mainline.

--
balbi

2010-08-30 16:29:07

by Dmitry Torokhov

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

Hi Felipe,

On Mon, Aug 30, 2010 at 11:04:39AM -0500, Felipe Balbi wrote:
> Hi Dmitry,
>
> When we tried to push N900's accelerometer driver as an
> input device you commented you didn't want sensors such
> as accelerometers, magnetometers, proximity, etc on the
> input layer because "they are not user input", although
> I didn't fully agree with you, we had to modify the drivers
> and, I believe, one of them is sitting in staging under
> the industrial i/o subsystem.
>
> Are you now accepting sensor drivers on the input layer ?
> that will make our life a lot easier but we need some
> definition to avoid having to re-work drivers when we
> want to push them to mainline.
>

I got persuaded that 3-axis accelerometers are most often intended to be
used as input devices, so I decided I should take these in (adxl134x is
there). I still think that sensor devices in general are better suited
to the IIO subsystem, and I hope it will get out of staging soon.

Once it is out of staging we may think about creating an IIO-to-input
bridge (could be either in kernel or a userspace solution based on
uinput) to route sensors that are indeed used as HIDs.

Hope this makes sense.
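
To give an idea of the userspace flavor of such a bridge: a small daemon
can create a virtual input device via uinput and feed it whatever it reads
from the sensor's native interface. A rough sketch only - the device node,
axis ranges and the actual sensor read below are all placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>
#include <linux/uinput.h>

static void emit(int fd, int type, int code, int value)
{
        struct input_event ev;

        memset(&ev, 0, sizeof(ev));
        ev.type = type;
        ev.code = code;
        ev.value = value;
        write(fd, &ev, sizeof(ev));
}

int main(void)
{
        struct uinput_user_dev uidev;
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

        if (fd < 0)
                return 1;

        /* the virtual device will report absolute X/Y/Z axes */
        ioctl(fd, UI_SET_EVBIT, EV_ABS);
        ioctl(fd, UI_SET_ABSBIT, ABS_X);
        ioctl(fd, UI_SET_ABSBIT, ABS_Y);
        ioctl(fd, UI_SET_ABSBIT, ABS_Z);

        memset(&uidev, 0, sizeof(uidev));
        snprintf(uidev.name, UINPUT_MAX_NAME_SIZE, "iio-to-input bridge");
        uidev.id.bustype = BUS_VIRTUAL;
        uidev.absmin[ABS_X] = uidev.absmin[ABS_Y] = uidev.absmin[ABS_Z] = -8000;
        uidev.absmax[ABS_X] = uidev.absmax[ABS_Y] = uidev.absmax[ABS_Z] = 8000;

        write(fd, &uidev, sizeof(uidev));
        ioctl(fd, UI_DEV_CREATE);

        for (;;) {
                int x = 0, y = 0, z = 0;

                /* read one sample from the sensor's native (IIO) interface here */

                emit(fd, EV_ABS, ABS_X, x);
                emit(fd, EV_ABS, ABS_Y, y);
                emit(fd, EV_ABS, ABS_Z, z);
                emit(fd, EV_SYN, SYN_REPORT, 0);

                usleep(20000);
        }

        /* a real bridge would ioctl(fd, UI_DEV_DESTROY) on shutdown */
        return 0;
}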

--
Dmitry

2010-08-30 17:10:42

by Felipe Balbi

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

Hi,

On Mon, 30 Aug 2010 09:28:56 -0700, Dmitry Torokhov
<[email protected]> wrote:
>> When we tried to push N900's accelerometer driver as an
>> input device you commented you didn't want sensors such
>> as accelerometers, magnetometers, proximity, etc on the
>> input layer because "they are not user input", although
>> I didn't fully agree with you, we had to modify the drivers
>> and, I believe, one of them is sitting in staging under
>> the industrial i/o subsystem.
>>
>> Are you now accepting sensor drivers on the input layer ?
>> that will make our life a lot easier but we need some
>> definition to avoid having to re-work drivers when we
>> want to push them to mainline.
>>
>
> I got persuaded that 3-axis accelerometers are most often indended to be
> used as input devices so I decided I should take these in (adxl134x is
> there). I still think that sensor devices in general are better suited
> to IIO subsystem and I hope it will get out of staging soon.
>
> Once it is out of staging we may think about creating a IIO-to-input
> bridge (copuld be either in kernel or a userspace solution based on
> uinput) to route sensors that are indeed used as HIDs.
>
> Hope this makes sense.

It kinda does, but such sensors will be used more and more as
input devices, especially for gaming on mobile devices.

For example, a proximity sensor might be used as the trigger
button in a first-person shooter; accelerometers will
be used to walk through the map, a magnetometer might be
used to look behind you, and a gyroscope to turn around your
own axis.

In the end, the user is the one moving the device around and
generating such events, so why not avoid yet another
subsystem if we will have to resort to solutions such as an
IIO-to-input bridge, which smells like a hackish way
to get input events from sensors anyway.

I really hope I can convince you that, on mobile at least,
sensors will mostly be used as HID devices and will give
app developers new ways to let users interact
with their apps.

Take a look at how a gyroscope is used on iphone, for
instance [1].

[1] http://www.youtube.com/watch?v=ORcu-c-qnjg

--
balbi

2010-08-30 17:21:51

by Dmitry Torokhov

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Mon, Aug 30, 2010 at 12:10:41PM -0500, Felipe Balbi wrote:
> Hi,
>
> On Mon, 30 Aug 2010 09:28:56 -0700, Dmitry Torokhov
> <[email protected]> wrote:
> >> When we tried to push N900's accelerometer driver as an
> >> input device you commented you didn't want sensors such
> >> as accelerometers, magnetometers, proximity, etc on the
> >> input layer because "they are not user input", although
> >> I didn't fully agree with you, we had to modify the drivers
> >> and, I believe, one of them is sitting in staging under
> >> the industrial i/o subsystem.
> >>
> >> Are you now accepting sensor drivers on the input layer ?
> >> that will make our life a lot easier but we need some
> >> definition to avoid having to re-work drivers when we
> >> want to push them to mainline.
> >>
> >
> > I got persuaded that 3-axis accelerometers are most often indended to be
> > used as input devices so I decided I should take these in (adxl134x is
> > there). I still think that sensor devices in general are better suited
> > to IIO subsystem and I hope it will get out of staging soon.
> >
> > Once it is out of staging we may think about creating a IIO-to-input
> > bridge (copuld be either in kernel or a userspace solution based on
> > uinput) to route sensors that are indeed used as HIDs.
> >
> > Hope this makes sense.
>
> It kinda does, but such sensors will be more and more used as
> input devices, specially for gaming on mobile devices.
>
> For example a proximity sensor might be used as the trigger
> button on a first person shooting game; accelerometers will
> be used to walk through the map and a magnetometer might be
> used to look behind you and a gyroscope to turn around your
> own axis.
>
> In the end, the user is the one moving the device around and
> generating such events, so why not avoiding yet another
> subsystem if we will have to resort to solutions such as
> iio-to-input bridge, which smells like a hackish solution
> to get input events from sensors anyway.
>
> I really hope I could convince you that, on mobile at least,
> sensors will be mostly used as HID devices and will give
> app developers new ways for them to allow users to interact
> with their app.
>
> Take a look at how a gyroscope is used on iphone, for
> instance [1].
>
> [1] http://www.youtube.com/watch?v=ORcu-c-qnjg
>

My response to this - will gyroscopes _only_ be used to turn around
in a game? Is a proximity sensor _only_ usable as a trigger in an FPS?
Won't we ever see such chips controlling technological processes?

I do hope that the answers are no, no and yes.

--
Dmitry

2010-08-30 17:37:12

by Jonathan Cameron

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On 08/30/10 18:10, Felipe Balbi wrote:
> Hi,
>
> On Mon, 30 Aug 2010 09:28:56 -0700, Dmitry Torokhov
> <[email protected]> wrote:
>>> When we tried to push N900's accelerometer driver as an
>>> input device you commented you didn't want sensors such
>>> as accelerometers, magnetometers, proximity, etc on the
>>> input layer because "they are not user input", although
>>> I didn't fully agree with you, we had to modify the drivers
>>> and, I believe, one of them is sitting in staging under
>>> the industrial i/o subsystem.
>>>
>>> Are you now accepting sensor drivers on the input layer ?
>>> that will make our life a lot easier but we need some
>>> definition to avoid having to re-work drivers when we
>>> want to push them to mainline.
>>>
>>
>> I got persuaded that 3-axis accelerometers are most often indended to be
>> used as input devices so I decided I should take these in (adxl134x is
>> there). I still think that sensor devices in general are better suited
>> to IIO subsystem and I hope it will get out of staging soon.
>>
>> Once it is out of staging we may think about creating a IIO-to-input
>> bridge (copuld be either in kernel or a userspace solution based on
>> uinput) to route sensors that are indeed used as HIDs.
I'll look into the userspace option. It might well be a quicker option
in the near future, not to mention providing a good example of using
our userspace interfaces...
>>
>> Hope this makes sense.
>
> It kinda does, but such sensors will be more and more used as
> input devices, specially for gaming on mobile devices.
>
> For example a proximity sensor might be used as the trigger
> button on a first person shooting game; accelerometers will
> be used to walk through the map and a magnetometer might be
> used to look behind you and a gyroscope to turn around your
> own axis.
>
> In the end, the user is the one moving the device around and
> generating such events, so why not avoiding yet another
> subsystem if we will have to resort to solutions such as
> iio-to-input bridge, which smells like a hackish solution
> to get input events from sensors anyway.
>
> I really hope I could convince you that, on mobile at least,
> sensors will be mostly used as HID devices and will give
> app developers new ways for them to allow users to interact
> with their app.
>
> Take a look at how a gyroscope is used on iphone, for
> instance [1].
>
> [1] http://www.youtube.com/watch?v=ORcu-c-qnjg
>
Kind of an obvious point, but there are lots of sensors in these
classes that no one is ever going to use for input. Take
a high-g accelerometer (anything above c. 10g for this purpose)
or the sorts of IMUs IIO currently has, which are simply too
expensive / large. Or take mid- to high-spec ADCs.
For other uses the requirements are typically very
different and could not be added to input without major disruption.
Personally I see no problem with having two drivers in the kernel for
a device, assuming they are sufficiently different in purpose.

I agree we have some fragmentation here, and ideally
these devices might all sit on some 'uber' subsystem providing
all things to all men. Unfortunately that would probably come with
massive complexity, and at the end of the day we already
have subsystems doing a very good job of handling some classes
of device. The intent of IIO was always to cover the ground
that is out of scope for hwmon and input. As you have probably
seen, this does indeed add a fair bit of complexity!

We're working on cleaning things up. There was a large set of fixes on
linux-iio today (including a few ABI fixes for
the magnetometer you mention above :) Still, if anyone wants
to speed up our move to mainline, we always appreciate code review!
*looks hopeful* I'm always happy to direct people towards the
most dubious bits or advise anyone porting a driver. We have
a reasonable team now (thanks partly to the exposure we got by
being in staging!), but more help is always good.

Jonathan
(IIO 'maintainer' in case anyone didn't guess)

2010-08-30 18:52:18

by Felipe Balbi

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

Hi,

On Mon, 30 Aug 2010 10:21:44 -0700, Dmitry Torokhov
<[email protected]> wrote:
> My response to this - are gyroscopes will _only_ be used to turn around
> in a game? Are proximity sensor is _only_ usable as a trigger in FPS?
> Won't we ever see such chips controlling technological processes?

Similarly, will accelerometers always be used as input devices?
Of course not; they have been used, e.g., to spin down hard disks
on laptops when they are shaken too hard. Still, they see quite a
fair bit of usage as input devices; you've seen it yourself,
right?

At the end of the day, when you put those on mobile devices
such as cellphones, app developers will be really keen on
creating new ways of interacting with apps (be it a game
or not) using those devices. So I will agree with Jonathan
that, maybe, having two separate drivers for different
purposes would make sense, although that might cause a bit
of trouble if the user ends up enabling the wrong driver when
building a custom kernel for his device.

My hope is that we can make use of a well known and uniform
API for all input devices in a device, be it a keypad,
touchscreen, accelerometer, magnetometer, gyro, or whatever.

--
balbi

2010-08-30 20:22:36

by Alan

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Mon, 30 Aug 2010 11:04:39 -0500
Felipe Balbi <[email protected]> wrote:

> Hi Dmitry,
>
> When we tried to push N900's accelerometer driver as an
> input device you commented you didn't want sensors such
> as accelerometers, magnetometers, proximity, etc on the
> input layer because "they are not user input", although
> I didn't fully agree with you, we had to modify the drivers
> and, I believe, one of them is sitting in staging under
> the industrial i/o subsystem.
>
> Are you now accepting sensor drivers on the input layer ?
> that will make our life a lot easier but we need some
> definition to avoid having to re-work drivers when we
> want to push them to mainline.

I would certainly vote for them being input when they are sometimes used
that way - compasses for example do get used by applications (like
compass programs, some of the real cool visualisation tools and things
like live/game mixed gaming environments) and accelerometers are gaming
inputs. Proximity is also input for some stuff although usually of much
more interest to the GUI manager than the GUI apps.

ALS is more of a dual-purpose thing - light levels are input features to
the GUI on PDAs, although on many embedded devices they are most
definitely part of the IIO subsystem.


2010-08-30 20:44:20

by Dmitry Torokhov

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Mon, Aug 30, 2010 at 09:40:25PM +0100, Alan Cox wrote:
> On Mon, 30 Aug 2010 11:04:39 -0500
> Felipe Balbi <[email protected]> wrote:
>
> > Hi Dmitry,
> >
> > When we tried to push N900's accelerometer driver as an
> > input device you commented you didn't want sensors such
> > as accelerometers, magnetometers, proximity, etc on the
> > input layer because "they are not user input", although
> > I didn't fully agree with you, we had to modify the drivers
> > and, I believe, one of them is sitting in staging under
> > the industrial i/o subsystem.
> >
> > Are you now accepting sensor drivers on the input layer ?
> > that will make our life a lot easier but we need some
> > definition to avoid having to re-work drivers when we
> > want to push them to mainline.
>
> I would certainly vote for them being input when they are sometimes used
> that way - compasses for example do get used by applications (like
> compass programs, some of the real cool visualisation tools and things
> like live/game mixed gaming environments) and accelerometers are gaming
> inputs.

But do you believe that input should be the "primary residence" for the
devices when they are only _sometimes_ used as input devices? Or would it
make sense to employ a converter from XXX to input (either purely
in-kernel or userspace over uinput)?

Thanks.

--
Dmitry

2010-08-30 20:50:18

by Dmitry Torokhov

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Mon, Aug 30, 2010 at 01:52:18PM -0500, Felipe Balbi wrote:
> Hi,
>
> On Mon, 30 Aug 2010 10:21:44 -0700, Dmitry Torokhov
> <[email protected]> wrote:
> > My response to this - are gyroscopes will _only_ be used to turn around
> > in a game? Are proximity sensor is _only_ usable as a trigger in FPS?
> > Won't we ever see such chips controlling technological processes?
>
> similarly, will accelerometers always be used as input devices ?
> of course not, they have been used e.g. to spin down hard disks
> on laptops when they are shaken too hard. Still they have quite a
> fair bit of usage as input devices; you've seen it yourself,
> right ?
>

This is a fair point. I was told that the main purpose for the chips in
question (specifically 3-axis accelerometers) is to be used as HIDs.
But it could be that I should not have merged adxl and instead waited
for IIO.

> in the end of the day, when you put those on mobile devices
> like e.g. cellphones, app developers will be really keen on
> creating new ways for interacting with apps (be it a game
> or not) using those devices, so I will agree with Jonathan
> that, maybe, having two separate drivers for different
> purposes would make sense, although that might cause a bit
> of trouble if user ends up enabling the wrong driver when
> building custom kernel for his device.

No, I do not believe that maintaining 2 separate devices for the same
hardware, with the only difference being how data is presented to userspace,
is a viable option.

>
> My hope is that we can make use of a well known and uniform
> API for all input devices in a device, be it a keypad,
> touchscreen, accelerometer, magnetometer, gyro, or whatever.
>

If only we could agree on what input devices are...

--
Dmitry

2010-08-30 21:28:41

by Linus Torvalds

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Monday, August 30, 2010, Dmitry Torokhov <[email protected]> wrote:
>
> But do you believe that input should be the "primary residence" for the
> devices when they are only _sometimes_ used as input devices? Or it
> would make sense to employ a converter from XXX to input (either purely
> in-kernel or userspace over uinput)?

Umm... You've brought that up before as an objection, but what _is_
that other model that you would convert from? IOW what *is* that XXX
that you talk about?

So I think accelerometers etc should be seen as input devices for the
simple reason that

(a) They really *are* input devices in all the most common cases. If
you have a phone with an accelerometer, it really is used as an input
device quite like a joystick.

The fact that there are specialized and rare cases where people may
have some fancier accelerometer that isn't necessarily seen that way,
and where it is used for some fancy scientific experiment or whatever
doesn't change this in *any* way. The common case that almost
everybody cares about - and that is getting more common - is the
simple and obvious input case.

How is a Wii accelerometer in any way different from a joystick? How
is a phone accelerometer any different from one? The answer is clear:
they aren't different! So it makes 100% sense to expose them under the
same subsystem.

(b) You cannot even name your XXX thing. It clearly makes sense (at
least within the context of a driver for some embedded phone chip) to
see it as an input device. And nothing else comes to mind. You'd have
to expose it as some random character device and then everybody would
just have to make it emulate an input device in user space anyway.
What's the point of that?

None. That's what the point is.

So I really don't understand why you're fighting the input device
angle. It makes sense as an input device. It does NOT make sense as
anything else.

Really - what else could a phone accelerometer (or GPS device, for
that matter) really be? It very much is about user input - even if it
isn't a keyboard.

Linus

2010-08-30 21:44:04

by Dmitry Torokhov

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Mon, Aug 30, 2010 at 02:28:37PM -0700, Linus Torvalds wrote:
> On Monday, August 30, 2010, Dmitry Torokhov <[email protected]> wrote:
> >
> > But do you believe that input should be the "primary residence" for the
> > devices when they are only _sometimes_ used as input devices? Or it
> > would make sense to employ a converter from XXX to input (either purely
> > in-kernel or userspace over uinput)?
>
> Umm... You've brought that up before as an objection, but what _is_
> that other model that you would convert from? IOW what *is* that XXX
> that you talk about?
>

IIO which is currently in staging.

> So I think accelerometers etc should be seen as input devices for the
> simple reason that
>
> (a) They really *are* input devices in all the most common cases. If
> you have a phone with an accelerometer, it really is used as an input
> device quite like a joystick.
>
> The fact that there are specialized and rare cases where people may
> have some fancier accelerometer that isn't necessarily seen that way,
> and where it is used for some fancy scientific experiment or whatever
> doesn't change this in *any* way. The common case that almost
> everybody cares about - and that is getting more common - is the
> simple and obvious input case.
>
> How is a Wii accelerometer in any way different from a joystick? How
> is a phone accelerometer any different from one? The answer is clear:
> they aren't different! So it makes 100% sense to expose them under the
> same subsystem.
>
> (b) You cannot even name your XXX thing. It clearly makes sense (at

I can - see above.

> least within Tue context of a driver for some embedded phone chip) to
> see it as an input device. And nothing else comes to mind. You'd have
> to expose it as some random character device and then everybody would
> just have to make it emulate an input device in user space anyway.
> What's the point of that?
>
> None. That's what the point is.
>
> So I really don't understand why you're fighting the input device
> angle. It makes sense as an input device. It does NOT make sense as
> anything else.
>
> Really - what else could a phone accelerometer

That is why I started taking accelerometers in. But I am concerned that
taking accelerometers (which indeed are most often input devices) will
lead people to try and use the same for temperature, ALS and other
sensors that are more often used in industrial process controls.

> (or GPS device, for that matter) really be?

But why should GPS be an input device? It has nothing to do with user
input.

> It very much is about user input - even if it
> isn't a keyboard.

--
Dmitry

2010-08-30 22:06:08

by Linus Torvalds

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Monday, August 30, 2010, Dmitry Torokhov <[email protected]> wrote:
>
> That is why I started taking accelerometers in. But I am concerned that
> taking accelerometers (which indeed are most often input devices) will
> lead people to try and use the same for temperature, ALS and other
> sensors that are more often used in industrial process controls.

You're just being silly.

Nobody writes a driver for a "temperature sensor" or "ambient light
sensor". They write a driver for a specific *chip* that is used in
cellphones etc, and that happens to have an ambient light and
temperature sensor on it.

And in that context, it really does make sense to see it as an input
driver. And the fact that there are industrial uses for ALS sensors
that aren't necessarily at all interested on the input layer should
not matter at all.

So don't bring up "ALS isn't always input" because within the context
of a driver for some highly integrated cellphone model, it really IS
input, and there is no ambiguity at all.

So your "sometimes it is, and sometimes it isn't" argument is bogus.
The ambiguity simply doesn't exist when seen in context.

>> (or GPS device, for that matter) really be?
>
> But why GPS should be input device? It has nothing to do with user
> input.

What? OF COURSE it is an input driver. It's the user moving the device
around. It's EXACTLY the same thing as an accelerometer in that
respect. Sure, it's a bit less precise and measures movement wrt some
external frame, but technically they are almost exactly the same.

If you are doing a navigation app, the accelerometer, the compass and
the GPS are all equally (but differently) important.

Again - it's not a user touching buttons. But it IS a user moving around.

Linus

2010-08-30 22:44:03

by Dmitry Torokhov

Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Mon, Aug 30, 2010 at 03:05:32PM -0700, Linus Torvalds wrote:
> On Monday, August 30, 2010, Dmitry Torokhov <[email protected]> wrote:
> >
> > That is why I started taking accelerometers in. But I am concerned that
> > taking accelerometers (which indeed are most often input devices) will
> > lead people to try and use the same for temperature, ALS and other
> > sensors that are more often used in industrial process controls.
>
> You're just being silly.
>
> Nobody writes a driver for a "temperature sensor" or "ambient light
> sensor". They write a driver for a specific *chip* that is used in
> cellphones etc, and that happens to have an ambient light and
> temperature sensor on it.

And? Yes, they have ALS and temperature sensors. They also have
voltage regulators, PWM, ADC and a bunch of other stuff wired on. You
are not arguing that all of those should be input devices just because
they happen to reside on the same chip?

>
> And in that context, it really does make sense to see it as an input
> driver. And the fact that there are industrial uses for ALS sensors
> that aren't necessarily at all interested on the input layer should
> not matter at all.

I think it does matter; we should have a standard interface for certain
functionality that makes sense for everyone. So far cellphone guys
wanted to plumb such devices through input not necessarily because these
are HID events but because:

1. Input transport via evdev is very convenient
2. There is no other standard alternative

Once there is a standard interface for such sensors they will happily use
it and will not look back.

>
> So don't bring up "ALS isn't always input" because within the context
> of a driver for some highly integrated cellphone model, it really IS
> input, and there is no ambiguity at all.
>
> So your "sometimes it is, and sometimes it isn't" argument is bogus.
> The ambiguity simply doesn't exist when seen in context.

Sure, for a particular cell phone there is no ambiguity, the sensor has
certain functionality assigned that is well known. But does this mean
that we should not expect parts to be reused at all anymore?

>
> >> (or GPS device, for that matter) really be?
> >
> > But why GPS should be input device? It has nothing to do with user
> > input.
>
> What? OF COURSE it is an input driver. It's the user moving the device
> around. It's EXACTLY the same thing as an accelerometer in that
> respect. Sure, it's a bit less precise and measures movement wrt some
> external frame, but technically they are almost exactly the same.
>
> If you are doing a navigation app, the accelerometer, the compass and
> the GPS are all equally (but differently) important.
>
> Again - it's not a user touching buttons. But it IS a user moving around.

Right, but do you expect that movement to cause an immediate action?
When you press a key - something happens; when you swing a Wii
controller - the device reacts. And really you can swap joysticks,
motion controllers, Sony's Sixaxis and so forth, and the application would
be hard pressed to tell the difference. I am unsure how you would play a
game with GPS as an input device.

--
Dmitry

2010-08-31 05:15:26

by Felipe Balbi

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

Hi,

On Mon, 30 Aug 2010 15:43:52 -0700, Dmitry Torokhov
<[email protected]> wrote:
> On Mon, Aug 30, 2010 at 03:05:32PM -0700, Linus Torvalds wrote:
>> On Monday, August 30, 2010, Dmitry Torokhov <[email protected]>
>> wrote:
>> >
>> > That is why I started taking accelerometers in. But I am concerned that
>> > taking accelerometers (which indeed are most often input devices) will
>> > lead people to try and use the same for temperature, ALS and other
>> > sensors that are more often used in industrial process controls.
>>
>> You're just being silly.
>>
>> Nobody writes a driver for a "temperature sensor" or "ambient light
>> sensor". They write a driver for a specific *chip* that is used in
>> cellphones etc, and that happens to have an ambient light and
>> temperature sensor on it.
>
> And? Yes, they have a ALS and temperature sensors. They also have
> voltage regulators, pwm, ADC and a bunch of other stuff wired on. You
> are not arguing that all of those should be input devices just because
> they happen to reside on the same chip?
>
>>
>> And in that context, it really does make sense to see it as an input
>> driver. And the fact that there are industrial uses for ALS sensors
>> that aren't necessarily at all interested on the input layer should
>> not matter at all.
>
> I think it does matter; we should have standard interface for certain
> functionality that makes sense for everyone. So far cellphone guys
> wanted to plumb such devices through input not necessarily because these
> are HID events but because:
>
> 1. Input transport via evdev is very convenient
> 2. There is no other standard alternative
>
> Once there is standard interface for such sensors they will happily use
> it and will not look back.

I disagree. If the device is used as input and currently
app developers have to find other ways to use such input
data, having a standard interface other than the input layer
will just make the mess standardized, but it will still
be a mess.

>> So don't bring up "ALS isn't always input" because within the context
>> of a driver for some highly integrated cellphone model, it really IS
>> input, and there is no ambiguity at all.
>>
>> So your "sometimes it is, and sometimes it isn't" argument is bogus.
>> The ambiguity simply doesn't exist when seen in context.
>
> Sure, for a particular cell phone there is no ambiguity, the sensor has
> certain functionality assigned that is well known. But does this mean
> that we should not expect parts being reused at all anymore?

Why are we trying to overengineer such simple devices? Wasn't
it agreed that we design and implement solutions when the problem
arrives? Why are we trying to think of all the cases such a simple
device might be used in, just for the sake of it not being an input device?

>> >> (or GPS device, for that matter) really be?
>> >
>> > But why GPS should be input device? It has nothing to do with user
>> > input.
>>
>> What? OF COURSE it is an input driver. It's the user moving the device
>> around. It's EXACTLY the same thing as an accelerometer in that
>> respect. Sure, it's a bit less precise and measures movement wrt some
>> external frame, but technically they are almost exactly the same.
>>
>> If you are doing a navigation app, the accelerometer, the compass and
>> the GPS are all equally (but differently) important.
>>
>> Again - it's not a user touching buttons. But it IS a user moving around.
>
> Right, but do you expect that movement to cause an immediate action?
> When you press a key - something happens; when you swing a Wii
> controller - the device reacts. And really you can swap joysticks,
> motion controllers, Sony's 6-axis and so forth and the application would
> be hard pressed to tell the difference. I am unsure how you would play a
> game with GPS as an input device.

It might not be used in a game, but in a map application that IS input.
You're failing to see that input devices are used in cases other than gaming.

--
balbi

2010-08-31 09:27:17

by Alan

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

> 1. Input transport via evdev is very convenient
> 2. There is no other standard alternative
>
> Once there is standard interface for such sensors they will happily use
> it and will not look back.

I think the fact that most of the interest in IIO is "how do we make an
IIO/Input bridge" speaks volumes.

> Sure, for a particular cell phone there is no ambiguity, the sensor has
> certain functionality assigned that is well known. But does this mean
> that we should not expect parts being reused at all anymore?

If non-input uses later need non-input interfaces they can switch to that
with an input bridge when there is one and when it happens, which
probably won't.

> I am unsure how you would play a game with GPS as an input device.

In a non-game context take a look at things like the British Museum
application that allows you to view wherever you are and as it was long
ago by fishing out a relevant photograph as you walk around. In a game
context can I suggest the Zombies game is an example ?

Alan

2010-08-31 09:29:22

by Alan

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

> IIO which is currently in staging.

Except we had ALS before that as a layer and Linus vetoed it. So there is
zero faith in IIO ever going anywhere.

Instead we now have about ten different light sensor APIs to the point
developers are writing a toolkit userspace plugin for *each* sensor.

2010-08-31 09:35:38

by Alan

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

> > My hope is that we can make use of a well known and uniform
> > API for all input devices in a device, be it a keypad,
> > touchscreen, accelerometer, magnetometer, gyro, or whatever.
> >
>
> If only we could agree what input devices are...

Is that the right test for some of these devices?

Surely the question is "what devices can be meaningfully represented by
the input API".

The device range is always going to be quite large and people want to use
the API because it means things just work. They can wire their home-made
surfboard unit and tilt sensors up to the PC and tuxracer just goes. They
can wire pedals and a current meter to it and use it as the speed input in
bzflag to simulate bicycle tanks, etc.

Alan

2010-08-31 12:31:10

by Jonathan Cameron

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On 08/31/10 10:44, Alan Cox wrote:
>> 1. Input transport via evdev is very convenient
>> 2. There is no other standard alternative
>>
>> Once there is standard interface for such sensors they will happily use
>> it and will not look back.
>
> I think the fact that most of the interest in IIO is "how do we make an
> IIO/Input bridge" speaks volumes.
It isn't. Most of the interest on LKML might be, but that is because all
of those interested in the 'industrial' side of things are keeping
their discussion on the linux-iio@vger list. Most of the noise on LKML may
be on the input bridge side, but most of the actual work and current
developers / users are not. The needs we have are not all met by input
(and nor should they be), hence the reason IIO exists.

Take a look at the work Analog have been doing with it. There are
few devices in their set that would fall into the blurred area we are
debating here. 3-phase energy meters for input, anyone?
>
>> Sure, for a particular cell phone there is no ambiguity, the sensor has
>> certain functionality assigned that is well known. But does this mean
>> that we should not expect parts being reused at all anymore?
>
> If non-input uses later need non-input interfaces they can switch to that
> with an input bridge when there is one and when it happens, which
> probably won't.
I guess I'll just have to write the bridge :) Putting it in userspace
code for a particular device should be trivial. It may take a little
more time and thought to get a configurable, general version in place.

It may not be the right option for some devices, but it will provide
a means if someone wants to take one of Analog's rather nice IMUs
or a 200G accelerometer and use it to drive their gaming rig ;)
Also, there are always devices using analog sensors connected to
much more general-purpose ADCs to further blur the boundaries.
There has to be a divide somewhere. Dmitry is merely trying to avoid
a flood of inappropriate drivers.
>
>> I am unsure how you would play a game with GPS as an input device.
>
> In a non-game context take a look at things like the British Museum
> application that allows you to view wherever you are and as it was long
> ago by fishing out a relevant photograph as you walk around. In a game
> context can I suggest the Zombies game is an example ?
>
> Alan

Jonathan

2010-08-31 12:46:49

by Jonathan Cameron

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On 08/31/10 10:46, Alan Cox wrote:
>> IIO which is currently in staging.
>
> Except we had ALS before that as a layer and Linus vetoed it. So there is
> zero faith in IIO ever going anywhere.
I have more faith - those developing it have limited time, but we will get
there. (another plug for anyone interested to get involved!)

IIRC Linus and others disliked ALS for two reasons...
* It was too specific. They didn't want to fragment sensor types that much.
* Userspace is used to dealing with input and in some cases a light sensor can look
like a switch.
The first certainly doesn't apply to IIO, the second will be fixed via an
input bridge.

If Linus isn't happy we'll just have to work on convincing him.
>
> Instead we now have about ten different light sensor APIs to the point
> developers are writing a toolkit userspace plugin for *each* sensor.
I agree. This is a big problem. We have in the past talked about
allowing interfaces to be standardized even if the underlying subsystem
is still open to debate.
We did openly debate the interface for some time with ALS...

After we went over this with IIO we decided to match / extend hwmon
wherever possible. Obviously that only covers sysfs interfaces, but it is a start.
We also openly debate all new elements (and in theory at least keep
the admittedly huge ABI document up to date). A large set of doc updates and
code fixes relating to the interface from Manuel Stahl went to Greg KH
this morning as a result of his work on general userspace tools.

On the chrdev side of things life is much more complex as performance
and overheads become an issue.

Jonathan

p.s. Matthias Nyman's email address is bouncing so I've removed it from the cc list.

2010-08-31 16:17:49

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Tue, Aug 31, 2010 at 10:44:46AM +0100, Alan Cox wrote:
> > 1. Input transport via evdev is very convenient
> > 2. There is no other standard alternative
> >
> > Once there is standard interface for such sensors they will happily use
> > it and will not look back.
>
> I think the fact that most of the interest in IIO is "how do we make an
> IIO/Input bridge" speaks volumes.

Like Jonathan mentioned, we so far only hear from mobile users here on
LKML.

>
> > Sure, for a particular cell phone there is no ambiguity, the sensor has
> > certain functionality assigned that is well known. But does this mean
> > that we should not expect parts being reused at all anymore?
>
> If non-input uses later need non-input interfaces they can switch to that
> with an input bridge when there is one and when it happens, which
> probably won't.

Would there even be an argument about which subsystem to use if an IIO->input
bridge existed today? Because if the answer is "no" then the push into input
is driven by convenience and not because it is the right solution.

>
> > I am unsure how you would play a game with GPS as an input device.
>
> In a non-game context take a look at things like the British Museum
> application that allows you to view wherever you are and as it was long
> ago by fishing out a relevant photograph as you walk around.

If an application takes something as an input, that does not necessarily
make it a human interface device. By this reasoning cameras should
be represented as input devices (why, some applications take input
from them), hwmon should be input as well (detect your move from
Arkhangelsk to Cairo by changes in your chassis temperature while under
the same load?). Serial ports? Input. Sound - should speech recognition
be implemented as an input device, converting microphone input directly
into an EV_KEY/KEY_x stream and bypassing the sound subsystem completely?
And if someone decides to use it differently - why, let's just write a
second driver. This way lies madness.

I really believe that input should represent purely human interface
devices, not arbitrary data acquisition devices.

> In a game
> context can I suggest the Zombies game is an example ?

I am not familiar with this game, sorry.

--
Dmitry

2010-08-31 16:42:12

by Alan

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

> > If non-input uses later need non-input interfaces they can switch to that
> > with an input bridge when there is one and when it happens, which
> > probably won't.
>
> Would there even be an argument which subsystem to use if IIO->input
> bridge existed today? Because if the answer is "no" then push into input
> is driven by convenience and not because it is the right solution.

Probably because most of these devices have nothing to do with industrial
I/O at all.

> If application does take something as an input it does not make it
> necessarily a human interface device. By this reasoning cameras should
> be represented as an input devices (why, some applications take input

That's not what I asked.

> I really believe that input should represent purely human interface
> devices, not arbitrary data acquisition devices.

That tends to make little sense where the API is the same and
applications benefit enormously from consistency. I'd rather have an
input->IIO bridge because that is the real world today!

The question is what does the API make *sense* for. Not what can you use
the API for. Unix (and Linux) are enormously powerful because of the use
of common interfaces and APIs.

So a voltmeter really makes no sense. It's not a set of keys and it
doesn't give X/Y/Z style readings. Nor does a camera. But a lot of things
do fit this to varying degrees.

I'm actually more dubious than Linus about ALS - because ALS tends not to
produce 'events' but to be sampled, and there are significant power
implications to unnecessary polling.

See it as a curse of success - because you got the API right and made it
flexible people want to use it. And the more it's used the less special
code is needed in user or kernel space for PDAs and phones - instead they
just work.

> > In a game
> > context can I suggest the Zombies game is an example ?
>
> I am not familiar with this game, sorry.

It uses GPS and networking to stage an 'in real world' zombie dodging
game.

Alan

2010-08-31 17:09:30

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Tue, Aug 31, 2010 at 05:59:37PM +0100, Alan Cox wrote:
> > > If non-input uses later need non-input interfaces they can switch to that
> > > with an input bridge when there is one and when it happens, which
> > > probably won't.
> >
> > Would there even be an argument which subsystem to use if IIO->input
> > bridge existed today? Because if the answer is "no" then push into input
> > is driven by convenience and not because it is the right solution.
>
> Probably because most of these devices have nothing to do with industrial
> I/O at all.

"Data acquisition devices" then?

>
> > If application does take something as an input it does not make it
> > necessarily a human interface device. By this reasoning cameras should
> > be represented as an input devices (why, some applications take input
>
> That's not what I asked.
>
> > I really believe that input should represent purely human interface
> > devices, not arbitrary data acquisition devices.
>
> That tends to make little sense where the API is the same and
> applications benefit enormously from consistency. I'd rather have an
> input->IIO bridge because that is the real world today !
>
> The question is what does the API make *sense* for. Not what can you use
> the API for. Unix (and Linux) are enormously powerful because of the use
> of common interfaces and APIs.
>
> So a voltmeter really makes no sense. It's not a set of keys and it
> doesn't give X/Y/Z style readings. Nor does a camera. But a lot of things
> do fit this to varying degrees.
>
> I'm actually more dubious than Linus about ALS - because ALS tends not to
> produce 'events' but to be sampled, and there are significant power
> implications to unnecessary polling.
>
> See it as a curse of success - because you got the API right and made it
> flexible people want to use it.

I knew it! It's all Vojtech's fault.

> And the more it's used the less special
> code is needed in user or kernel space for PDAs and phones - instead they
> just work.

OK, so let's say we start moving some of the devices into input. Which
ones do we consider suitable for input? I guess some 3-axis
accelerometers, what else? Also, what new event types would we need?
Let's take GPS - I do not think that ABS_X and ABS_Y are the best events
to be used for such devices: I am trying to allow applications to be
ignorant of what exact device they are talking to and rather concentrate
on device capabilities (the list of events supported). GPS is sufficiently
different from a tablet/touchscreen; while some might want to use both
as inputs to a game, most applications would want to know which one is
which.

Also, GPS, like ALS, would probably be polling, no?
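
(To make the capability-based model above concrete, here is a small,
illustrative sketch of how an application can look at what a device
advertises rather than what it is called. EVIOCGBIT/EVIOCGABS are the
standard evdev ioctls; the "looks like an accelerometer" heuristic is only
an assumption for the example.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))
#define NLONGS(x) (((x) + BITS_PER_LONG - 1) / BITS_PER_LONG)
#define test_bit(bit, array) \
        (((array)[(bit) / BITS_PER_LONG] >> ((bit) % BITS_PER_LONG)) & 1)

int main(int argc, char **argv)
{
        unsigned long evbits[NLONGS(EV_MAX + 1)];
        unsigned long absbits[NLONGS(ABS_MAX + 1)];
        struct input_absinfo absinfo;
        int fd;

        if (argc < 2)
                return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
                return 1;

        memset(evbits, 0, sizeof(evbits));
        memset(absbits, 0, sizeof(absbits));
        ioctl(fd, EVIOCGBIT(0, sizeof(evbits)), evbits);
        ioctl(fd, EVIOCGBIT(EV_ABS, sizeof(absbits)), absbits);

        /* Heuristic only: three absolute axes, no touch/pressure codes. */
        if (test_bit(EV_ABS, evbits) &&
            test_bit(ABS_X, absbits) && test_bit(ABS_Y, absbits) &&
            test_bit(ABS_Z, absbits) && !test_bit(ABS_PRESSURE, absbits)) {
                ioctl(fd, EVIOCGABS(ABS_X), &absinfo);
                printf("looks like an accelerometer, X range %d..%d\n",
                       absinfo.minimum, absinfo.maximum);
        }
        close(fd);
        return 0;
}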

--
Dmitry

2010-08-31 17:24:45

by Mohamed Ikbel Boulabiar

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

IMHO sensors can no longer be considered non-input devices.
Things have changed too much in recent years. Input "sources" now have
a very different use than before (smartphones, tablets and handheld
devices...).
They all have many inputs that come mostly from sensors.

So is the definition of an input device something that the user can
interact with?
Maybe we should consider input devices to be made from 1 to N sensors
with some filtering blocks which only expose the useful data.

If we think like this, an input device can be made from sub-parts which
can be bare sensors. (Many sensors are exposed as
Human Interface Devices, which are mainly input devices.)

i

2010-08-31 17:59:13

by Jonathan Cameron

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)


linux-iio cc'd for comments on the code dump at end of email..

On 08/31/10 17:59, Alan Cox wrote:
>>> If non-input uses later need non-input interfaces they can switch to that
>>> with an input bridge when there is one and when it happens, which
>>> probably won't.
>>
>> Would there even be an argument which subsystem to use if IIO->input
>> bridge existed today? Because if the answer is "no" then push into input
>> is driven by convenience and not because it is the right solution.
>
> Probably because most of these devices have nothing to do with industrial
> I/O at all.
Hmm.. The 'industrial' bit is somewhat misleading. Ultimately it was the
best name anyone came up with a while ago. Happy to change it if someone
gives me a better suggestion!
>
>> If application does take something as an input it does not make it
>> necessarily a human interface device. By this reasoning cameras should
>> be represented as an input devices (why, some applications take input
>
> That's not what I asked.
>
>> I really believe that input should represent purely human interface
>> devices, not arbitrary data acquisition devices.
>
> That tends to make little sense where the API is the same and
> applications benefit enormously from consistency. I'd rather have an
> input->IIO bridge because that is the real world today !
Really basic proof of concept at the bottom of this email...
It's nasty and not remotely general. A general version will be a bit
longer. This is just a hack combining the lis3l02dqbuffersimple.c example
and a uinput example that googling gave me... I have no idea if I got the
uinput stuff right, but it's spitting out data that looks about right...

>
> The question is what does the API make *sense* for. Not what can you use
> the API for. Unix (and Linux) are enormously powerful because of the use
> of common interfaces and APIs.
>
> So a voltmeter really makes no sense. It's not a set of keys and it
> doesn't give X/Y/Z style readings. Nor does a camera. But a lot of things
> do fit this to varying degrees.
>
> I'm actually more dubious than Linus about ALS - because ALS tends not to
> produce 'events' but to be sampled, and there are significant power
> implications to unnecessary polling.
It covered the drivers we had at the time. Sadly most ALS sensors don't
produce a trivial 'the light level has changed' event. Actually the only one
I think did was the ACPI one. They do provide interrupts; one just has to keep
resetting suitable thresholds around the current light level - a game that
isn't trivial to get right in a driver. Not sure anyone has taken this on as yet...
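
(A sketch of the threshold game described above, purely illustrative: after each
reading or event, re-centre the rising and falling thresholds around the current
light level so the sensor only interrupts on a real change. The attribute paths
are passed in because the sysfs names vary per driver; nothing here is a real ABI.)

#include <stdio.h>

int write_attr(const char *path, long value)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fprintf(f, "%ld\n", value);
        return fclose(f);
}

/* Re-arm the interrupt thresholds at +/- 'window' around the last reading. */
int rearm_light_thresholds(const char *rising_attr, const char *falling_attr,
                           long current_lux, long window)
{
        if (write_attr(rising_attr, current_lux + window))
                return -1;
        return write_attr(falling_attr, current_lux - window);
}

The non-trivial part, as noted above, is doing the equivalent inside a driver
correctly.
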
>
> See it as a curse of success - because you got the API right and made it
> flexible people want to use it. And the more it's used the less special
> code is needed in user or kernel space for PDAs and phones - instead they
> just work.
>
>>> In a game
>>> context can I suggest the Zombies game is an example ?
>>
>> I am not familiar with this game, sorry.
>
> It uses GPS and networking to stage an 'in real world' zombie dodging
> game.
>

Anyhow, here is a code dump of a very nasty proof of concept for an IIO
to input bridge in userspace. If I'd known it would be this easy I'd
have done this ages ago. I'm away for next couple of days, so a more
general version won't occur until next week unless someone else picks it
up.

---

/* IIO to uinput userspace bridge example for specific device.
*
* Copyright (c) 2010 Jonathan Cameron
* Sometime in the past Luke Casson (or at least it is on his website)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is to illustrate how an IIO to input bridge would more or less
* work.
* Some comments inline note where short cuts have been taken, denoted S:
*/

#define _GNU_SOURCE /* for asprintf() */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/dir.h>
#include <sys/ioctl.h>
#include <linux/types.h>
#include "iio_utils.h"
#include <linux/input.h>
#include <linux/uinput.h>

int send_event(int fd, __u16 type, __u16 code, __s32 value)
{
struct input_event event;

memset(&event, 0, sizeof(event));
event.type = type;
event.code = code;
event.value = value;

if (write(fd, &event, sizeof(event)) != sizeof(event)) {
fprintf(stderr, "Error on send_event");
return -1;
}

return 0;
}

/* S: these would need to be provided as configuration so
* that the bridge would know which devices to bridge to input
*/
const char *device_name = "lis3l02dq";
const char *trigger_name_base = "lis3l02dq-dev";
/*
* S: obtain these from combination of configuration file for
* bridge and buffer description in sysfs for the IIO device
*/
const int num_vals = 3;
/* S: not actually using the timestamp as I couldn't be bothered
* to check what format uinput needed
*/
const int scan_ts = 1;
/* S: this is way too long as it means you only get data a couple
* of times a second.
*/
const int buf_len = 128;
/*
* Obviously this would be infinite
*/
const int num_loops = 40;

/* S: Calculate this from sysfs params */
int size_from_scanmode(int num_vals, int timestamp)
{
if (num_vals && timestamp)
return 16;
else if (timestamp)
return 8;
else
return num_vals*2;
}

int main(int argc, char **argv)
{

int ret;
int i, j, toread;
FILE *fp_ev;
int fp;

char *trigger_name, *dev_dir_name, *buf_dir_name;
char *data;
ssize_t read_size;
struct iio_event_data dat;
int dev_num, trig_num;

char *buffer_access, *buffer_event;
const char *iio_dir = "/sys/bus/iio/devices/";
int scan_size;
float gain = 1;

/* New uinput stuff */
int fd;
struct uinput_user_dev device;
memset(&device, 0, sizeof device);
fd = open("/dev/input/uinput", O_WRONLY);
if (fd < 0) {
fprintf(stderr, "error opening uinput\n");
return -1;
}
strcpy(device.name, "lis3l02dq accelerometer");

/* No idea what sort of bus I should use... */
device.id.bustype = BUS_VIRTUAL;
device.id.vendor = 1;
device.id.product = 1;
device.id.version = 1;
for (i=0; i < ABS_MAX; i++) {
device.absmax[i] = -1;
device.absmin[i] = -1;
device.absfuzz[i] = -1;
device.absflat[i] = -1;
}
/* S: These can all be derived from sysfs attributes,
* though we need the format spec Manuel suggested the
* other day. Typically IIO never cares about range,
* just format.
*/
device.absmin[ABS_X] = -2048;
device.absmax[ABS_X] = 2047;
device.absfuzz[ABS_X] = 0;
device.absflat[ABS_X] = 0;
device.absmin[ABS_Y] = -2048;
device.absmax[ABS_Y] = 2047;
device.absfuzz[ABS_Y] = 0;
device.absflat[ABS_Y] = 0;
device.absmin[ABS_Z] = -2048;
device.absmax[ABS_Z] = 2047;
device.absfuzz[ABS_Z] = 0;
device.absflat[ABS_Z] = 0;

if (write(fd, &device, sizeof(device)) != sizeof(device)) {
fprintf(stderr, "error setup\n");
return -1;
}

if (ioctl(fd, UI_SET_EVBIT, EV_ABS) < 0) {
fprintf(stderr, "error evbit abs\n");
return -1;
}

for (i = ABS_X; i <= ABS_Z; i++)
if (ioctl(fd, UI_SET_ABSBIT, i) < 0) {
fprintf(stderr, "error setabsbit %d\n", i);
return -1;
}

if (ioctl(fd, UI_DEV_CREATE) < 0) {
fprintf(stderr, "error create\n");
return -1;
}

/* Find out which iio device is the accelerometer. */
dev_num = find_type_by_name(device_name, "device");
if (dev_num < 0) {
printf("Failed to find the %s\n", device_name);
ret = -ENODEV;
goto error_ret;
}
printf("iio device number being used is %d\n", dev_num);
asprintf(&dev_dir_name, "%sdevice%d", iio_dir, dev_num);

/*
* Build the trigger name.
* In this case we want the lis3l02dq's data ready trigger
* for this lis3l02dq. The naming is lis3l02dq_dev[n], where
* n matches the device number found above.
*/
ret = asprintf(&trigger_name, "%s%d", trigger_name_base, dev_num);
if (ret < 0) {
ret = -ENOMEM;
goto error_free_dev_dir_name;
}

/*
* Find the trigger by name.
* This is technically unnecessary here as we only need to
* refer to the trigger by name and that name is already
* known.
*/
trig_num = find_type_by_name(trigger_name, "trigger");
if (trig_num < 0) {
printf("Failed to find the %s\n", trigger_name);
ret = -ENODEV;
goto error_free_triggername;
}
printf("iio trigger number being used is %d\n", trig_num);

/*
* Read in the scale value - in a more generic case, first
* check for accel_scale, then the individual channel scales
*/
ret = read_sysfs_float("accel_scale", dev_dir_name, &gain);
if (ret)
goto error_free_triggername;

/*
* Construct the directory name for the associated buffer.
* As we know that the lis3l02dq has only one buffer this may
* be built rather than found.
*/
ret = asprintf(&buf_dir_name, "%sdevice%d:buffer0", iio_dir, dev_num);
if (ret < 0) {
ret = -ENOMEM;
goto error_free_triggername;
}
/* Set the device trigger to be the data rdy trigger found above */
ret = write_sysfs_string_and_verify("trigger/current_trigger",
dev_dir_name,
trigger_name);
if (ret < 0) {
printf("Failed to write current_trigger file\n");
goto error_free_buf_dir_name;
}

/* Setup ring buffer parameters */
ret = write_sysfs_int("length", buf_dir_name, buf_len);
if (ret < 0)
goto error_free_buf_dir_name;

/* Enable the buffer */
ret = write_sysfs_int("enable", buf_dir_name, 1);
if (ret < 0)
goto error_free_buf_dir_name;

data = malloc(size_from_scanmode(num_vals, scan_ts)*buf_len);
if (!data) {
ret = -ENOMEM;
goto error_free_buf_dir_name;
}

ret = asprintf(&buffer_access,
"/dev/device%d:buffer0:access0",
dev_num);
if (ret < 0) {
ret = -ENOMEM;
goto error_free_data;
}

ret = asprintf(&buffer_event, "/dev/device%d:buffer0:event0", dev_num);
if (ret < 0) {
ret = -ENOMEM;
goto error_free_buffer_access;
}
/* Attempt to open non blocking the access dev */
fp = open(buffer_access, O_RDONLY | O_NONBLOCK);
if (fp == -1) { /* if it isn't there, bail out */
printf("Failed to open %s\n", buffer_access);
ret = -errno;
goto error_free_buffer_event;
}
/* Attempt to open the event access dev (blocking this time) */
fp_ev = fopen(buffer_event, "rb");
if (fp_ev == NULL) {
printf("Failed to open %s\n", buffer_event);
ret = -errno;
goto error_close_buffer_access;
}

/* Wait for events num_loops times */
for (j = 0; j < num_loops; j++) {
read_size = fread(&dat, 1, sizeof(struct iio_event_data),
fp_ev);
switch (dat.id) {
case IIO_EVENT_CODE_RING_100_FULL:
toread = buf_len;
break;
case IIO_EVENT_CODE_RING_75_FULL:
toread = buf_len*3/4;
break;
case IIO_EVENT_CODE_RING_50_FULL:
toread = buf_len/2;
break;
default:
printf("Unexpecteded event code\n");
continue;
}
read_size = read(fp,
data,
toread*size_from_scanmode(num_vals, scan_ts));
if (read_size < 0 && errno == EAGAIN) {
printf("nothing available\n");
continue;
}
scan_size = size_from_scanmode(num_vals, scan_ts);
for (i = 0; i < read_size/scan_size; i++) {

/* This is the only bit I changed (rather than
* added) from the original demo code.
*/
__s16 val = *(__s16 *)(&data[i*scan_size + (0)*2]);
send_event(fd, EV_ABS, ABS_X, val);
val = *(__s16 *)(&data[i*scan_size + (1)*2]);
send_event(fd, EV_ABS, ABS_Y, val);
val = *(__s16 *)(&data[i*scan_size + (2)*2]);
send_event(fd, EV_ABS, ABS_Z, val);

/* EV_SYN/SYN_REPORT marks the end of one complete X/Y/Z sample */
send_event(fd, EV_SYN, SYN_REPORT, 0);
}
}

/* Stop the ring buffer */
ret = write_sysfs_int("enable", buf_dir_name, 0);
if (ret < 0)
goto error_close_buffer_event;

/* Disconnect from the trigger - just write a dummy name.*/
write_sysfs_string("trigger/current_trigger",
dev_dir_name, "NULL");

error_close_buffer_event:
fclose(fp_ev);
error_close_buffer_access:
close(fp);
error_free_buffer_event:
free(buffer_event);
error_free_buffer_access:
free(buffer_access);
error_free_data:
free(data);
error_free_buf_dir_name:
free(buf_dir_name);
error_free_triggername:
free(trigger_name);
error_free_dev_dir_name:
free(dev_dir_name);
error_ret:

/* Tear down the uinput device before closing */
ioctl(fd, UI_DEV_DESTROY);
close(fd);
return ret;
}


2010-08-31 18:10:26

by Jonathan Cameron

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On 08/31/10 18:24, Mohamed Ikbel Boulabiar wrote:
> IMHO sensors can no longer be considered non-input devices.
> Things have changed too much in recent years. Input "sources" now have
> a very different use than before (smartphones, tablets and handheld
> devices...).
> They all have many inputs that come mostly from sensors.
>
> So is the definition of an input device something that the user can
> interact with?
Sadly it is nowhere near as clean a definition as we would like.
There are too many fuzzy regions. Lots of the devices are used both in
consumer products and for other forms of high-end input. A lot
of consumer devices use general-purpose ADCs at a tiny percentage of
their maximum data rates because they are cheap and standard.
> Maybe we should consider input devices to be made from 1 to N sensors
> with some filtering blocks which only expose the useful data.
That's what you get via input (to a certain extent). But a lot of what
people want in applications is derived data and some of the algorithms
to do that are very complex and certainly should not be in the kernel.

Again this may be a case for using uinput to push your derived data back
into kernel space. (Did I mention that I rather like uinput now I've
started playing with it :)
>
> If we think like this, an input device can be made from sub-parts which
> can be bare sensors. (Many sensors are exposed as
> Human Interface Devices, which are mainly input devices.)
HID is just fine if the aggregation is nicely handled by a separate
processor on the device (which is what is actually happening). It
is large and messy, and an enormous number of devices abuse it for things
that aren't input. They have exactly the same issue that Dmitry
is trying to avoid. Just because you can use an interface to handle
your data doesn't make it the right thing to do!

Hence HID is a very nice illustration in lots of ways ;)

Jonathan

2010-08-31 18:15:55

by Jonathan Cameron

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

<snip>

>
> Anyhow, here is a code dump of a very nasty proof of concept for an IIO
> to input bridge in userspace. If I'd known it would be this easy I'd
> have done this ages ago. I'm away for next couple of days, so a more
> general version won't occur until next week unless someone else picks it
> up.
>
One thing I forgot to mention. IIO uses two different paths for what
become events in input. The main data stream (which is predictable)
comes via the buffer interface handled here. Threshold type events
(and all the weird and wonderful variants) come via a second chrdev
if the device supports them. These are trivial to add to the below
and I'll put them in the generalized version (basically a second fd
and a call to select + a big translation table - which may want to
be configurable...)
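
(For illustration, the generalized bridge loop would roughly take the shape
below: one fd for the buffered data stream, one for the event chrdev,
multiplexed with select() and run through a translation table. The iio_ev
layout is assumed to match the event structure used in the demo, send_event()
is the helper from the demo, and the table contents are placeholders.)

#include <unistd.h>
#include <sys/select.h>
#include <linux/types.h>

int send_event(int fd, __u16 type, __u16 code, __s32 value); /* from the demo */

struct iio_ev {                 /* assumed to match struct iio_event_data */
        int id;
        __s64 timestamp;
};

struct event_map {
        int iio_event_id;       /* code reported on the IIO event chrdev */
        __u16 input_type;       /* e.g. EV_KEY */
        __u16 input_code;       /* e.g. KEY_PROG1 */
};

void bridge_loop(int buf_fd, int ev_fd, int uinput_fd,
                 const struct event_map *map, int map_len)
{
        fd_set rfds;
        int i, nfds = (buf_fd > ev_fd ? buf_fd : ev_fd) + 1;
        struct iio_ev ev;

        for (;;) {
                FD_ZERO(&rfds);
                FD_SET(buf_fd, &rfds);
                FD_SET(ev_fd, &rfds);

                if (select(nfds, &rfds, NULL, NULL, NULL) <= 0)
                        break;

                if (FD_ISSET(buf_fd, &rfds)) {
                        /* Drain the ring buffer and emit EV_ABS samples,
                         * as the demo's main loop does. */
                }

                if (FD_ISSET(ev_fd, &rfds) &&
                    read(ev_fd, &ev, sizeof(ev)) == sizeof(ev))
                        for (i = 0; i < map_len; i++)
                                if (map[i].iio_event_id == ev.id)
                                        send_event(uinput_fd,
                                                   map[i].input_type,
                                                   map[i].input_code, 1);
        }
}
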
<snip quoted code dump>

2010-08-31 18:25:41

by Daniel Barkalow

[permalink] [raw]
Subject: Re: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

On Mon, 30 Aug 2010, Linus Torvalds wrote:

> On Monday, August 30, 2010, Dmitry Torokhov <[email protected]> wrote:
> >
> > But do you believe that input should be the "primary residence" for the
> > devices when they are only _sometimes_ used as input devices? Or it
> > would make sense to employ a converter from XXX to input (either purely
> > in-kernel or userspace over uinput)?
>
> Umm... You've brought that up before as an objection, but what _is_
> that other model that you would convert from? IOW what *is* that XXX
> that you talk about?
>
> So I think accelerometers etc should be seen as input devices for the
> simple reason that
>
> (a) They really *are* input devices in all the most common cases. If
> you have a phone with an accelerometer, it really is used as an input
> device quite like a joystick.

At least some of the time, an accelerometer is more like part of a
joystick. You can have phones with accelerometers at both ends, where the
input events produced depend on the combination of the readings, with each
of the individual accelerometers being a device that is also available by
itself. You can have phones which can be used in various orientations,
where the logical "left" direction depends on the orientation. With a
joystick, the device that the driver is for has the assignment of motions
to inputs; for an accelerometer in a phone, the assignment is outside the
scope of the chip, and it would make sense to have a second driver that
takes non-chip-specific accelerometer output (per-accelerometer, per-axis)
and maps it to input events based on how the user holds the box the chip
is in.

It's like having a whole bunch of button devices; you want them to be
exposed as a keyboard, and the sensor hardware reports press and release,
but something else is needed to know which hardware button is which key.
Similarly, you want to know about how the user is moving the device, and
you may need to process various sorts of raw data to produce anything
meaningful to applications.
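
(A minimal sketch of the remapping layer described above: the chip driver
reports raw per-axis values, and a separate, orientation-aware step decides
what "left" and "up" mean. The orientation enum and the mapping are invented
for the example.)

enum orientation { PORTRAIT, LANDSCAPE_LEFT, PORTRAIT_FLIPPED, LANDSCAPE_RIGHT };

struct accel_sample { int x, y, z; };

/* Rotate a raw sample into the current logical screen orientation. */
struct accel_sample remap_sample(struct accel_sample raw, enum orientation o)
{
        struct accel_sample out = raw;

        switch (o) {
        case LANDSCAPE_LEFT:
                out.x = -raw.y;
                out.y = raw.x;
                break;
        case PORTRAIT_FLIPPED:
                out.x = -raw.x;
                out.y = -raw.y;
                break;
        case LANDSCAPE_RIGHT:
                out.x = raw.y;
                out.y = -raw.x;
                break;
        case PORTRAIT:
        default:
                break;          /* identity mapping */
        }
        return out;             /* z is unaffected by a screen rotation */
}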

-Daniel
*This .sig left intentionally blank*

2010-08-31 22:18:15

by Chris Hudson

[permalink] [raw]
Subject: RE: Sensors and the input layer (was Re: [RFC] [PATCH V2 1/2] input: CMA3000 Accelerometer driver)

> OK, so let's say we start moving some of the devices into input. Which
> ones do we consider suitable for input? I guess some 3-axis
> accelerometers, what else? Also, what new event types would we need?
> Let's take GPS - I do not think that ABS_X and ABS_Y are the best events
> to be used for such devices: I am trying to allow applications to be
> ignorant of what exact device they are talking to and rather concentrate
> on device capabilities (the list of events supported). GPS is sufficiently
> different from a tablet/touchscreen; while some might want to use both
> as inputs to a game, most applications would want to know which one is
> which.

> Also, GPS, like ALS, would probably be polling, no?

> --
> Dmitry

Hello Dmitry,

Current-generation accelerometers typically have some functions built-in that provide an interrupt signal under certain conditions. For instance, orientation and motion detection can be calculated at the hardware level to reduce the need for constant polling and software to handle the algorithms. Some devices have built-in tap detection, which can require data at several hundred samples per second, something that is certainly not efficiently supported in software.

I have been using ABS_MISC to pack 4 bytes of interrupt status information, but it might be a good idea to consider having a separate field for:
Orientation (screen rotation)
Motion Detection (or sleep detection)
Tap Detection
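
(Purely as an illustration of the packing/unpacking involved, with a
hypothetical byte layout - the real layout is whatever a given driver and its
userspace agree on:)

/* Hypothetical layout: byte 0 = orientation, byte 1 = motion/sleep,
 * byte 2 = tap. Packed into a single ABS_MISC value by the driver
 * and unpacked again by userspace.
 */
#define STATUS_ORIENT_SHIFT     0
#define STATUS_MOTION_SHIFT     8
#define STATUS_TAP_SHIFT        16

static inline int pack_status(int orient, int motion, int tap)
{
        return (orient << STATUS_ORIENT_SHIFT) |
               (motion << STATUS_MOTION_SHIFT) |
               (tap    << STATUS_TAP_SHIFT);
}

static inline void unpack_status(int v, int *orient, int *motion, int *tap)
{
        *orient = (v >> STATUS_ORIENT_SHIFT) & 0xff;
        *motion = (v >> STATUS_MOTION_SHIFT) & 0xff;
        *tap    = (v >> STATUS_TAP_SHIFT)    & 0xff;
}

Separate event codes for orientation, motion and tap would let userspace
subscribe to each independently instead of decoding a multiplexed value.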

Regards,
Chris Hudson