2024-05-23 23:25:49

by Namhyung Kim

Subject: Re: [PATCH v1] tools api io: Move filling the io buffer to its own function

On Sun, May 19, 2024 at 11:17 AM Ian Rogers <[email protected]> wrote:
>
> In general a read fills 4kb, so filling the buffer is a 1-in-4096
> operation; move it out of the io__get_char function to avoid some
> checking overhead and to better hint that the function is good to inline.
>
> For perf's IO-intensive internal (non-rigorous) benchmarks there's a
> near 8% improvement to kallsyms-parsing with a default build.

Oh, is it just from removing the io->eof check? Otherwise I don't
see any difference.

Thanks,
Namhyung

>
> Before:
> ```
> $ perf bench internals all
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
> Average synthesis took: 146.322 usec (+- 0.305 usec)
> Average num. events: 61.000 (+- 0.000)
> Average time per event 2.399 usec
> Average data synthesis took: 145.056 usec (+- 0.155 usec)
> Average num. events: 329.000 (+- 0.000)
> Average time per event 0.441 usec
>
> Average kallsyms__parse took: 162.313 ms (+- 0.599 ms)
> ...
> Computing performance of sysfs PMU event scan for 100 times
> Average core PMU scanning took: 53.720 usec (+- 7.823 usec)
> Average PMU scanning took: 375.145 usec (+- 23.974 usec)
> ```
> After:
> ```
> $ perf bench internals all
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
> Average synthesis took: 127.829 usec (+- 0.079 usec)
> Average num. events: 61.000 (+- 0.000)
> Average time per event 2.096 usec
> Average data synthesis took: 133.652 usec (+- 0.101 usec)
> Average num. events: 327.000 (+- 0.000)
> Average time per event 0.409 usec
>
> Average kallsyms__parse took: 150.415 ms (+- 0.313 ms)
> ...
> Computing performance of sysfs PMU event scan for 100 times
> Average core PMU scanning took: 47.790 usec (+- 1.178 usec)
> Average PMU scanning took: 376.945 usec (+- 23.683 usec)
> ```
>
> Signed-off-by: Ian Rogers <[email protected]>
> ---
> tools/lib/api/io.h | 69 +++++++++++++++++++++++++---------------------
> 1 file changed, 38 insertions(+), 31 deletions(-)
>
> diff --git a/tools/lib/api/io.h b/tools/lib/api/io.h
> index 84adf8102018..d3eb04d1bc89 100644
> --- a/tools/lib/api/io.h
> +++ b/tools/lib/api/io.h
> @@ -43,48 +43,55 @@ static inline void io__init(struct io *io, int fd,
>  	io->eof = false;
>  }
>
> -/* Reads one character from the "io" file with similar semantics to fgetc. */
> -static inline int io__get_char(struct io *io)
> +/* Read from fd filling the buffer. Called when io->data == io->end. */
> +static inline int io__fill_buffer(struct io *io)
>  {
> -	char *ptr = io->data;
> +	ssize_t n;
>
>  	if (io->eof)
>  		return -1;
>
> -	if (ptr == io->end) {
> -		ssize_t n;
> -
> -		if (io->timeout_ms != 0) {
> -			struct pollfd pfds[] = {
> -				{
> -					.fd = io->fd,
> -					.events = POLLIN,
> -				},
> -			};
> -
> -			n = poll(pfds, 1, io->timeout_ms);
> -			if (n == 0)
> -				errno = ETIMEDOUT;
> -			if (n > 0 && !(pfds[0].revents & POLLIN)) {
> -				errno = EIO;
> -				n = -1;
> -			}
> -			if (n <= 0) {
> -				io->eof = true;
> -				return -1;
> -			}
> +	if (io->timeout_ms != 0) {
> +		struct pollfd pfds[] = {
> +			{
> +				.fd = io->fd,
> +				.events = POLLIN,
> +			},
> +		};
> +
> +		n = poll(pfds, 1, io->timeout_ms);
> +		if (n == 0)
> +			errno = ETIMEDOUT;
> +		if (n > 0 && !(pfds[0].revents & POLLIN)) {
> +			errno = EIO;
> +			n = -1;
>  		}
> -		n = read(io->fd, io->buf, io->buf_len);
> -
>  		if (n <= 0) {
>  			io->eof = true;
>  			return -1;
>  		}
> -		ptr = &io->buf[0];
> -		io->end = &io->buf[n];
>  	}
> -	io->data = ptr + 1;
> -	return *ptr;
> +	n = read(io->fd, io->buf, io->buf_len);
> +
> +	if (n <= 0) {
> +		io->eof = true;
> +		return -1;
> +	}
> +	io->data = &io->buf[0];
> +	io->end = &io->buf[n];
> +	return 0;
> +}
> +
> +/* Reads one character from the "io" file with similar semantics to fgetc. */
> +static inline int io__get_char(struct io *io)
> +{
> +	if (io->data == io->end) {
> +		int ret = io__fill_buffer(io);
> +
> +		if (ret)
> +			return ret;
> +	}
> +	return *io->data++;
>  }
>
>  /* Read a hexadecimal value with no 0x prefix into the out argument hex. If the
> --
> 2.45.0.rc1.225.g2a3ae87e7f-goog
>
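
For context, here is a minimal usage sketch of the API being changed; it
is not part of the patch. It assumes the tools include setup where the
header is visible as <api/io.h>, and that io__init() takes the fd plus a
caller-supplied buffer and its length, as in tools/lib/api/io.h. With a
4kb buffer, every call except the roughly 1-in-4096 refill stays on the
hot compare/load/increment path.

```
#include <fcntl.h>
#include <unistd.h>
#include <api/io.h>

/* Illustrative helper (not from the tree): count newlines via io__get_char(). */
static int count_lines(const char *path)
{
	char buf[4096];		/* the 4kb read buffer the commit message refers to */
	struct io io;
	int ch, lines = 0;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;

	io__init(&io, fd, buf, sizeof(buf));
	while ((ch = io__get_char(&io)) != -1) {
		if (ch == '\n')
			lines++;
	}
	close(fd);
	return lines;
}
```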


2024-05-24 04:47:37

by Ian Rogers

Subject: Re: [PATCH v1] tools api io: Move filling the io buffer to its own function

On Thu, May 23, 2024 at 4:25 PM Namhyung Kim <[email protected]> wrote:
>
> On Sun, May 19, 2024 at 11:17 AM Ian Rogers <[email protected]> wrote:
> >
> > In general a read fills 4kb, so filling the buffer is a 1-in-4096
> > operation; move it out of the io__get_char function to avoid some
> > checking overhead and to better hint that the function is good to inline.
> >
> > For perf's IO-intensive internal (non-rigorous) benchmarks there's a
> > near 8% improvement to kallsyms-parsing with a default build.
>
> Oh, is it just from removing the io->eof check? Otherwise I don't
> see any difference.

I was hoping that, with the cold code moved out-of-line, the hot part
of the function could be inlined into things like reading the hex
character. I didn't see that; presumably there are too many callers,
which made the inliner think sharing would be best even though the hot
code is just a compare, a pointer dereference and an increment. I
tried forcing inlining, but it didn't seem to win over just having the
code out-of-line. The eof check should be very well predicted. The
out-of-line code is branched over with a forward branch, which should
be at most one mispredict, but again not a huge deal. I didn't do a
more thorough analysis as I still prefer to have the cold code
out-of-line.
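
For illustration, a rough sketch of such a forced-inlining experiment,
assuming the tools tree's __always_inline helper (inline plus GCC/Clang's
always_inline attribute) and the io__fill_buffer() from the patch; this is
not the exact code that was tried:

```
/* Hypothetical variant: pin the hot path inline, keep the refill cold. */
static __always_inline int io__get_char(struct io *io)
{
	/* Hot path: one compare, one load, one post-increment. */
	if (io->data == io->end) {
		int ret = io__fill_buffer(io);	/* cold: poll() + read() */

		if (ret)
			return ret;
	}
	return *io->data++;
}
```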

Thanks,
Ian

> Thanks,
> Namhyung
>
> >
> > Before:
> > ```
> > $ perf bench internals all
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> > Average synthesis took: 146.322 usec (+- 0.305 usec)
> > Average num. events: 61.000 (+- 0.000)
> > Average time per event 2.399 usec
> > Average data synthesis took: 145.056 usec (+- 0.155 usec)
> > Average num. events: 329.000 (+- 0.000)
> > Average time per event 0.441 usec
> >
> > Average kallsyms__parse took: 162.313 ms (+- 0.599 ms)
> > ...
> > Computing performance of sysfs PMU event scan for 100 times
> > Average core PMU scanning took: 53.720 usec (+- 7.823 usec)
> > Average PMU scanning took: 375.145 usec (+- 23.974 usec)
> > ```
> > After:
> > ```
> > $ perf bench internals all
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> > Average synthesis took: 127.829 usec (+- 0.079 usec)
> > Average num. events: 61.000 (+- 0.000)
> > Average time per event 2.096 usec
> > Average data synthesis took: 133.652 usec (+- 0.101 usec)
> > Average num. events: 327.000 (+- 0.000)
> > Average time per event 0.409 usec
> >
> > Average kallsyms__parse took: 150.415 ms (+- 0.313 ms)
> > ...
> > Computing performance of sysfs PMU event scan for 100 times
> > Average core PMU scanning took: 47.790 usec (+- 1.178 usec)
> > Average PMU scanning took: 376.945 usec (+- 23.683 usec)
> > ```
> >
> > Signed-off-by: Ian Rogers <[email protected]>
> > ---
> > tools/lib/api/io.h | 69 +++++++++++++++++++++++++---------------------
> > 1 file changed, 38 insertions(+), 31 deletions(-)
> >
> > diff --git a/tools/lib/api/io.h b/tools/lib/api/io.h
> > index 84adf8102018..d3eb04d1bc89 100644
> > --- a/tools/lib/api/io.h
> > +++ b/tools/lib/api/io.h
> > @@ -43,48 +43,55 @@ static inline void io__init(struct io *io, int fd,
> >  	io->eof = false;
> >  }
> >
> > -/* Reads one character from the "io" file with similar semantics to fgetc. */
> > -static inline int io__get_char(struct io *io)
> > +/* Read from fd filling the buffer. Called when io->data == io->end. */
> > +static inline int io__fill_buffer(struct io *io)
> >  {
> > -	char *ptr = io->data;
> > +	ssize_t n;
> >
> >  	if (io->eof)
> >  		return -1;
> >
> > -	if (ptr == io->end) {
> > -		ssize_t n;
> > -
> > -		if (io->timeout_ms != 0) {
> > -			struct pollfd pfds[] = {
> > -				{
> > -					.fd = io->fd,
> > -					.events = POLLIN,
> > -				},
> > -			};
> > -
> > -			n = poll(pfds, 1, io->timeout_ms);
> > -			if (n == 0)
> > -				errno = ETIMEDOUT;
> > -			if (n > 0 && !(pfds[0].revents & POLLIN)) {
> > -				errno = EIO;
> > -				n = -1;
> > -			}
> > -			if (n <= 0) {
> > -				io->eof = true;
> > -				return -1;
> > -			}
> > +	if (io->timeout_ms != 0) {
> > +		struct pollfd pfds[] = {
> > +			{
> > +				.fd = io->fd,
> > +				.events = POLLIN,
> > +			},
> > +		};
> > +
> > +		n = poll(pfds, 1, io->timeout_ms);
> > +		if (n == 0)
> > +			errno = ETIMEDOUT;
> > +		if (n > 0 && !(pfds[0].revents & POLLIN)) {
> > +			errno = EIO;
> > +			n = -1;
> >  		}
> > -		n = read(io->fd, io->buf, io->buf_len);
> > -
> >  		if (n <= 0) {
> >  			io->eof = true;
> >  			return -1;
> >  		}
> > -		ptr = &io->buf[0];
> > -		io->end = &io->buf[n];
> >  	}
> > -	io->data = ptr + 1;
> > -	return *ptr;
> > +	n = read(io->fd, io->buf, io->buf_len);
> > +
> > +	if (n <= 0) {
> > +		io->eof = true;
> > +		return -1;
> > +	}
> > +	io->data = &io->buf[0];
> > +	io->end = &io->buf[n];
> > +	return 0;
> > +}
> > +
> > +/* Reads one character from the "io" file with similar semantics to fgetc. */
> > +static inline int io__get_char(struct io *io)
> > +{
> > +	if (io->data == io->end) {
> > +		int ret = io__fill_buffer(io);
> > +
> > +		if (ret)
> > +			return ret;
> > +	}
> > +	return *io->data++;
> >  }
> >
> >  /* Read a hexadecimal value with no 0x prefix into the out argument hex. If the
> > --
> > 2.45.0.rc1.225.g2a3ae87e7f-goog
> >

2024-05-30 16:51:51

by Namhyung Kim

Subject: Re: [PATCH v1] tools api io: Move filling the io buffer to its own function

On Thu, May 23, 2024 at 9:47 PM Ian Rogers <[email protected]> wrote:
>
> On Thu, May 23, 2024 at 4:25 PM Namhyung Kim <[email protected]> wrote:
> >
> > On Sun, May 19, 2024 at 11:17 AM Ian Rogers <[email protected]> wrote:
> > >
> > > In general a read fills 4kb, so filling the buffer is a 1-in-4096
> > > operation; move it out of the io__get_char function to avoid some
> > > checking overhead and to better hint that the function is good to inline.
> > >
> > > For perf's IO-intensive internal (non-rigorous) benchmarks there's a
> > > near 8% improvement to kallsyms-parsing with a default build.
> >
> > Oh, is it just from removing the io->eof check? Otherwise I don't
> > see any difference.
>
> I was hoping that, with the cold code moved out-of-line, the hot part
> of the function could be inlined into things like reading the hex
> character. I didn't see that; presumably there are too many callers,
> which made the inliner think sharing would be best even though the hot
> code is just a compare, a pointer dereference and an increment. I
> tried forcing inlining, but it didn't seem to win over just having the
> code out-of-line. The eof check should be very well predicted. The
> out-of-line code is branched over with a forward branch, which should
> be at most one mispredict, but again not a huge deal. I didn't do a
> more thorough analysis as I still prefer to have the cold code
> out-of-line.

Ok, I don't see much difference with this change. But the change itself
looks fine.

Thanks,
Namhyung


Before:

# Running internals/synthesize benchmark...
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
Average synthesis took: 237.274 usec (+- 0.066 usec)
Average num. events: 24.000 (+- 0.000)
Average time per event 9.886 usec
Average data synthesis took: 241.126 usec (+- 0.087 usec)
Average num. events: 128.000 (+- 0.000)
Average time per event 1.884 usec

# Running internals/kallsyms-parse benchmark...
Average kallsyms__parse took: 184.374 ms (+- 0.022 ms)

# Running internals/inject-build-id benchmark...
Average build-id injection took: 20.096 msec (+- 0.115 msec)
Average time per event: 1.970 usec (+- 0.011 usec)
Average memory usage: 11574 KB (+- 29 KB)
Average build-id-all injection took: 13.477 msec (+- 0.100 msec)
Average time per event: 1.321 usec (+- 0.010 usec)
Average memory usage: 11160 KB (+- 0 KB)

# Running internals/evlist-open-close benchmark...
Number of cpus: 64
Number of threads: 1
Number of events: 1 (64 fds)
Number of iterations: 100
evlist__open: Permission denied

# Running internals/pmu-scan benchmark...
Computing performance of sysfs PMU event scan for 100 times
Average core PMU scanning took: 135.880 usec (+- 0.249 usec)
Average PMU scanning took: 816.745 usec (+- 48.293 usec)


After:

# Running internals/synthesize benchmark...
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
Average synthesis took: 235.711 usec (+- 0.067 usec)
Average num. events: 24.000 (+- 0.000)
Average time per event 9.821 usec
Average data synthesis took: 240.992 usec (+- 0.058 usec)
Average num. events: 128.000 (+- 0.000)
Average time per event 1.883 usec

# Running internals/kallsyms-parse benchmark...
Average kallsyms__parse took: 179.664 ms (+- 0.043 ms)

# Running internals/inject-build-id benchmark...
Average build-id injection took: 19.901 msec (+- 0.117 msec)
Average time per event: 1.951 usec (+- 0.011 usec)
Average memory usage: 12163 KB (+- 10 KB)
Average build-id-all injection took: 13.627 msec (+- 0.086 msec)
Average time per event: 1.336 usec (+- 0.008 usec)
Average memory usage: 11160 KB (+- 0 KB)

# Running internals/evlist-open-close benchmark...
Number of cpus: 64
Number of threads: 1
Number of events: 1 (64 fds)
Number of iterations: 100
evlist__open: Permission denied

# Running internals/pmu-scan benchmark...
Computing performance of sysfs PMU event scan for 100 times
Average core PMU scanning took: 136.540 usec (+- 0.294 usec)
Average PMU scanning took: 819.415 usec (+- 48.437 usec)
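
As a side note: when only the kallsyms number is of interest, the
benchmark can also be run on its own using perf bench's usual
<collection> <benchmark> form rather than "internals all"; numbers
elided here:

```
$ perf bench internals kallsyms-parse
Average kallsyms__parse took: ... ms (+- ... ms)
```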

2024-05-30 17:03:52

by Ian Rogers

Subject: Re: [PATCH v1] tools api io: Move filling the io buffer to its own function

On Thu, May 30, 2024 at 9:44 AM Namhyung Kim <[email protected]> wrote:
>
> On Thu, May 23, 2024 at 9:47 PM Ian Rogers <[email protected]> wrote:
> >
> > On Thu, May 23, 2024 at 4:25 PM Namhyung Kim <[email protected]> wrote:
> > >
> > > On Sun, May 19, 2024 at 11:17 AM Ian Rogers <[email protected]> wrote:
> > > >
> > > > In general a read fills 4kb, so filling the buffer is a 1-in-4096
> > > > operation; move it out of the io__get_char function to avoid some
> > > > checking overhead and to better hint that the function is good to inline.
> > > >
> > > > For perf's IO-intensive internal (non-rigorous) benchmarks there's a
> > > > near 8% improvement to kallsyms-parsing with a default build.
> > >
> > > Oh, is it just from removing the io->eof check? Otherwise I don't
> > > see any difference.
> >
> > I was hoping that, with the cold code moved out-of-line, the hot part
> > of the function could be inlined into things like reading the hex
> > character. I didn't see that; presumably there are too many callers,
> > which made the inliner think sharing would be best even though the hot
> > code is just a compare, a pointer dereference and an increment. I
> > tried forcing inlining, but it didn't seem to win over just having the
> > code out-of-line. The eof check should be very well predicted. The
> > out-of-line code is branched over with a forward branch, which should
> > be at most one mispredict, but again not a huge deal. I didn't do a
> > more thorough analysis as I still prefer to have the cold code
> > out-of-line.
>
> Ok, I don't see much difference with this change. But the change itself
> looks fine.
>
> Thanks,
> Namhyung
>
>
> Before:
>
> # Running internals/synthesize benchmark...
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
> Average synthesis took: 237.274 usec (+- 0.066 usec)
> Average num. events: 24.000 (+- 0.000)
> Average time per event 9.886 usec
> Average data synthesis took: 241.126 usec (+- 0.087 usec)
> Average num. events: 128.000 (+- 0.000)
> Average time per event 1.884 usec
>
> # Running internals/kallsyms-parse benchmark...
> Average kallsyms__parse took: 184.374 ms (+- 0.022 ms)
>
> # Running internals/inject-build-id benchmark...
> Average build-id injection took: 20.096 msec (+- 0.115 msec)
> Average time per event: 1.970 usec (+- 0.011 usec)
> Average memory usage: 11574 KB (+- 29 KB)
> Average build-id-all injection took: 13.477 msec (+- 0.100 msec)
> Average time per event: 1.321 usec (+- 0.010 usec)
> Average memory usage: 11160 KB (+- 0 KB)
>
> # Running internals/evlist-open-close benchmark...
> Number of cpus: 64
> Number of threads: 1
> Number of events: 1 (64 fds)
> Number of iterations: 100
> evlist__open: Permission denied
>
> # Running internals/pmu-scan benchmark...
> Computing performance of sysfs PMU event scan for 100 times
> Average core PMU scanning took: 135.880 usec (+- 0.249 usec)
> Average PMU scanning took: 816.745 usec (+- 48.293 usec)
>
>
> After:
>
> # Running internals/synthesize benchmark...
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
> Average synthesis took: 235.711 usec (+- 0.067 usec)
> Average num. events: 24.000 (+- 0.000)
> Average time per event 9.821 usec
> Average data synthesis took: 240.992 usec (+- 0.058 usec)
> Average num. events: 128.000 (+- 0.000)
> Average time per event 1.883 usec
>
> # Running internals/kallsyms-parse benchmark...
> Average kallsyms__parse took: 179.664 ms (+- 0.043 ms)

So this is still 2%. I was building without options like DEBUG=1
enabled, so perhaps that'd explain the difference. Anyway, if you're
more comfortable with a commit message saying a 2% performance win, I
don't mind it being updated, or I can upload a v2. It's likely this is
being over-thought given the change :-)
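
(For reference, working the kallsyms deltas quoted above:
(184.374 - 179.664) / 184.374 is roughly a 2.6% improvement on
Namhyung's run, versus (162.313 - 150.415) / 162.313, roughly 7.3%,
for the default-build numbers in the commit message.)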

Thanks,
Ian

> # Running internals/inject-build-id benchmark...
> Average build-id injection took: 19.901 msec (+- 0.117 msec)
> Average time per event: 1.951 usec (+- 0.011 usec)
> Average memory usage: 12163 KB (+- 10 KB)
> Average build-id-all injection took: 13.627 msec (+- 0.086 msec)
> Average time per event: 1.336 usec (+- 0.008 usec)
> Average memory usage: 11160 KB (+- 0 KB)
>
> # Running internals/evlist-open-close benchmark...
> Number of cpus: 64
> Number of threads: 1
> Number of events: 1 (64 fds)
> Number of iterations: 100
> evlist__open: Permission denied
>
> # Running internals/pmu-scan benchmark...
> Computing performance of sysfs PMU event scan for 100 times
> Average core PMU scanning took: 136.540 usec (+- 0.294 usec)
> Average PMU scanning took: 819.415 usec (+- 48.437 usec)

2024-05-30 17:09:29

by Namhyung Kim

Subject: Re: [PATCH v1] tools api io: Move filling the io buffer to its own function

On Thu, May 30, 2024 at 10:02 AM Ian Rogers <[email protected]> wrote:
>
> On Thu, May 30, 2024 at 9:44 AM Namhyung Kim <[email protected]> wrote:
> >
> > On Thu, May 23, 2024 at 9:47 PM Ian Rogers <[email protected]> wrote:
> > >
> > > On Thu, May 23, 2024 at 4:25 PM Namhyung Kim <[email protected]> wrote:
> > > >
> > > > On Sun, May 19, 2024 at 11:17 AM Ian Rogers <[email protected]> wrote:
> > > > >
> > > > > In general a read fills 4kb, so filling the buffer is a 1-in-4096
> > > > > operation; move it out of the io__get_char function to avoid some
> > > > > checking overhead and to better hint that the function is good to inline.
> > > > >
> > > > > For perf's IO-intensive internal (non-rigorous) benchmarks there's a
> > > > > near 8% improvement to kallsyms-parsing with a default build.
> > > >
> > > > Oh, is it just from removing the io->eof check? Otherwise I don't
> > > > see any difference.
> > >
> > > I was hoping that, with the cold code moved out-of-line, the hot part
> > > of the function could be inlined into things like reading the hex
> > > character. I didn't see that; presumably there are too many callers,
> > > which made the inliner think sharing would be best even though the hot
> > > code is just a compare, a pointer dereference and an increment. I
> > > tried forcing inlining, but it didn't seem to win over just having the
> > > code out-of-line. The eof check should be very well predicted. The
> > > out-of-line code is branched over with a forward branch, which should
> > > be at most one mispredict, but again not a huge deal. I didn't do a
> > > more thorough analysis as I still prefer to have the cold code
> > > out-of-line.
> >
> > Ok, I don't see much difference with this change. But the change itself
> > looks fine.
> >
> > Thanks,
> > Namhyung
> >
> >
> > Before:
> >
> > # Running internals/synthesize benchmark...
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> > Average synthesis took: 237.274 usec (+- 0.066 usec)
> > Average num. events: 24.000 (+- 0.000)
> > Average time per event 9.886 usec
> > Average data synthesis took: 241.126 usec (+- 0.087 usec)
> > Average num. events: 128.000 (+- 0.000)
> > Average time per event 1.884 usec
> >
> > # Running internals/kallsyms-parse benchmark...
> > Average kallsyms__parse took: 184.374 ms (+- 0.022 ms)
> >
> > # Running internals/inject-build-id benchmark...
> > Average build-id injection took: 20.096 msec (+- 0.115 msec)
> > Average time per event: 1.970 usec (+- 0.011 usec)
> > Average memory usage: 11574 KB (+- 29 KB)
> > Average build-id-all injection took: 13.477 msec (+- 0.100 msec)
> > Average time per event: 1.321 usec (+- 0.010 usec)
> > Average memory usage: 11160 KB (+- 0 KB)
> >
> > # Running internals/evlist-open-close benchmark...
> > Number of cpus: 64
> > Number of threads: 1
> > Number of events: 1 (64 fds)
> > Number of iterations: 100
> > evlist__open: Permission denied
> >
> > # Running internals/pmu-scan benchmark...
> > Computing performance of sysfs PMU event scan for 100 times
> > Average core PMU scanning took: 135.880 usec (+- 0.249 usec)
> > Average PMU scanning took: 816.745 usec (+- 48.293 usec)
> >
> >
> > After:
> >
> > # Running internals/synthesize benchmark...
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> > Average synthesis took: 235.711 usec (+- 0.067 usec)
> > Average num. events: 24.000 (+- 0.000)
> > Average time per event 9.821 usec
> > Average data synthesis took: 240.992 usec (+- 0.058 usec)
> > Average num. events: 128.000 (+- 0.000)
> > Average time per event 1.883 usec
> >
> > # Running internals/kallsyms-parse benchmark...
> > Average kallsyms__parse took: 179.664 ms (+- 0.043 ms)
>
> So this is still 2%. I was building without options like DEBUG=1
> enabled, so perhaps that'd explain the difference. Anyway, if you're
> more comfortable with a commit message saying a 2% performance win, I
> don't mind it being updated, or I can upload a v2. It's likely this is
> being over-thought given the change :-)

Never mind, I think it's fine and I can touch up the message. :)

Thanks,
Namhyung