
How to extract first and second columns from a file that contains tabular data [duplicate]

This question already has answers here: How to extract the first column from a tsv file? (2 answers) Closed 1 year ago.

Consider the following file, holding network client data in a tabular format:

    <H2>Welcome to CCcam 2.1.3 server </H2>
    Connected clients: 75
    +------------+---------------+------------+------------+-----------+-----+--------+----------------------------------------------+----------+
    | Username   | Host          | Connected  | Idle time  | ECM       | EMM | Version| Last used share                              | Ecm time |
    +------------+---------------+------------+------------+-----------+-----+--------+----------------------------------------------+----------+
    |user4       |1x3.xxx.126.xx |00d 22:53:52|00d 22:53:52|0 (0)      |0 (0)|2.1.3   |                                              |          |
    |useral      |7x.xx.xxx.1xx  |00d 22:53:52|00d 22:53:52|0 (0)      |0 (0)|2.0.11  |                                              |          |
    |userxxxx    |x7.xx5.xx.1x1  |00d 22:53:52|00d 22:53:52|0 (0)      |0 (0)|2.0.11  |                                              |          |
    |someuse     |9x.8x.xx0.xx4  |00d 22:53:52|00d 00:00:15|8248 (8245)|0 (0)|2.1.4   |UPC 1W - Zone Reality Europe [CW2] (ok)       |  0.3060s |
    |nameuse     |x8.xx3.6x.7x   |00d 22:53:51|00d 22:53:51|0 (0)      |0 (0)|2.2.1   |                                              |          |
    |blabl       |xx.2xx.x.x4    |00d 22:53:50|00d 00:00:00|4282 (2541)|0 (0)|2.1.4   |Total TV - Fox TV Serbia [NDS] (ok)           |  0.1099s |
    |aaaaaa      |xx9.x3.4x.2x   |00d 22:53:49|00d 00:56:41|1753 (1536)|0 (0)|2.1.4   |Total TV - OBN [NDS] (nok)                    |  0.1264s |
    +------------+---------------+------------+------------+-----------+-----+--------+----------------------------------------------+----------+

    +------------+---------------------------------------+
    | Username   | Shareinfo                             |
    +------------+---------------------------------------+
    |       user3|                                       |
    |      user33|                                       |
    |    user2222|                                       |
    |     user333|local 0d02:000000 6589(6588)           |
    |            |remote 0d02:000000 756(755)            |
    |            |remote 1802:000000 853(852)            |
    |            |local 1802:000000 50(50)               |
    |     user444|                                       |
    |       user3|local 091f:000000 2154(2147)           |
    |            |remote 091f:000000 394(394)            |
    |            |local 1802:000000 1734(0)              |
    |      USER22|local 091f:000000 1677(1509)           |
    |            |local 0d02:000000 4(3)                 |
    |            |local 1802:000000 70(22)               |
    |            |remote 091f:000000 1(1)                |
    |            |remote 1802:000000 1(1)                |
    |       USER1|local 0d02:000000 3359(3357)           |
    |            |remote 0d02:000000 165(165)            | 
    +------------+---------------------------------------+

How can we extract the first two columns of this file, that is, the user and IP address fields, so that the output looks like this:

user4  1x3.xxx.126.xx
useral 7x.xx.xxx.1xx 
userxxxx x7.xx5.xx.1x1 
someuse 9x.8x.xx0.xx4  
nameuse x8.xx3.6x.7x
blabl xx.2xx.x.x4    
aaaaaa xx9.x3.4x.2x     


Save the following awk program as test.awk:

BEGIN { FS = "|" }            # fields are separated by "|"

/Last used share/ { next }    # skip the column-header line of the first table

NF == 11 { print $2, $3 }     # only rows of the first table have 11 "|"-separated fields: print user and host

Update: instructions for running it:

$ awk -f test.awk < cam.txt
user4        1x3.xxx.126.xx 
useral       7x.xx.xxx.1xx  
userxxxx     x7.xx5.xx.1x1  
someuse      9x.8x.xx0.xx4  
nameuse      x8.xx3.6x.7x   
blabl        xx.2xx.x.x4    
aaaaaa       xx9.x3.4x.2x   

Or, someprogram | awk -f test.awk
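
For comparison, the same extraction can also be written as a one-liner. This is only a sketch, not part of the original answer; it is equivalent to test.awk but additionally strips the padding spaces inside the user and host fields:

awk -F'|' 'NF == 11 && !/Last used share/ { gsub(/ +/, "", $2); gsub(/ +/, "", $3); print $2, $3 }' cam.txt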


I recommend applying the techniques developed in the answer to the question "How to parse rackspace big data api response using shell scripting". In your particular case, this yields:

# ccam_canonize
#   Canonicalize the CCcam report read on stdin: drop the banner,
#   column-header and "+---+" separator lines, strip the surrounding
#   "|" delimiters and the padding around fields, and discard
#   everything from the Shareinfo table onwards.
ccam_canonize()
{
  sed -e '1,4d;/^+[-+]*$/d;/^ *$/d;s/^| *//;s/ *|$//;s/ *| */|/g;/Shareinfo/,$d'
}

# ccam_extract
#   Extract the user (field 1) and IP address (field 2) columns.
ccam_extract()
{
  awk -F'|' '{ print $1, $2 }'
}

ccam_canonize | ccam_extract
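
Assuming the two functions above are defined in the current shell and the report is saved as cam.txt (the file name used in the update above), the pipeline can be fed like this:

ccam_canonize < cam.txt | ccam_extract

or, as with the first answer, someprogram | ccam_canonize | ccam_extract.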