[ixpmanager] peer-to-peer statistics | sFlow

Vladislav Leontjev vladislav.leontjev at gmail.com
Fri Mar 22 08:57:18 GMT 2019


Sorry, the process output was missed earlier;
I waited for some time and here it is:

# ./sflow-to-rrd-handler --macdbtype=configured --debug
DEBUG: dropped update for: protocol: 4 vlan: 80 srcvli: 1 dstvli: 0 pktsize: 1522 samplerate: 8192
DEBUG: rejected: FLOW,10.81.7.25,17,0,0007ebea2860,00155d5c6135,0x0800,80,0,10.81.15.50,10.81.15.52,1,0x00,255,8,0,0x00,1522,110,8192
DEBUG: starting rrd flush at time interval: 6535.309006, time: 1553244422
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 4 srcvli: 1
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 4 srcvli: 3
DEBUG: aggregate: building update for vlan: 80 type: bytes protocol: 4 file: /var/www/ixp_p2p/ipv4/bytes/aggregate/aggregate.ipv4.bytes.vlan00080.rrd
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 4 srcvli: 1
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 4 srcvli: 3
DEBUG: aggregate: building update for vlan: 80 type: pkts protocol: 4 file: /var/www/ixp_p2p/ipv4/pkts/aggregate/aggregate.ipv4.pkts.vlan00080.rrd
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 6 srcvli: 1
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 6 srcvli: 3
DEBUG: aggregate: building update for vlan: 80 type: bytes protocol: 6 file: /var/www/ixp_p2p/ipv6/bytes/aggregate/aggregate.ipv6.bytes.vlan00080.rrd
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 6 srcvli: 1
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 6 srcvli: 3
DEBUG: aggregate: building update for vlan: 80 type: pkts protocol: 6 file: /var/www/ixp_p2p/ipv6/pkts/aggregate/aggregate.ipv6.pkts.vlan00080.rrd
DEBUG: flush completed at 1553244423
DEBUG: dropped update for: protocol: 4 vlan: 80 srcvli: 1 dstvli: 0 pktsize: 1522 samplerate: 8192
DEBUG: rejected: FLOW,10.81.7.25,17,0,0007ebea2860,00155d5c6135,0x0800,80,0,10.81.15.50,10.81.15.52,1,0x00,255,8,0,0x00,1522,110,8192
DEBUG: starting rrd flush at time interval: 157.036471, time: 1553244579
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 4 srcvli: 3
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 4 srcvli: 1
DEBUG: aggregate: building update for vlan: 80 type: bytes protocol: 4 file: /var/www/ixp_p2p/ipv4/bytes/aggregate/aggregate.ipv4.bytes.vlan00080.rrd
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 4 srcvli: 3
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 4 srcvli: 1
DEBUG: aggregate: building update for vlan: 80 type: pkts protocol: 4 file: /var/www/ixp_p2p/ipv4/pkts/aggregate/aggregate.ipv4.pkts.vlan00080.rrd
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 6 srcvli: 3
DEBUG: individual: building update for vlan: 80 type: bytes protocol: 6 srcvli: 1
DEBUG: aggregate: building update for vlan: 80 type: bytes protocol: 6 file: /var/www/ixp_p2p/ipv6/bytes/aggregate/aggregate.ipv6.bytes.vlan00080.rrd
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 6 srcvli: 3
DEBUG: individual: building update for vlan: 80 type: pkts protocol: 6 srcvli: 1
DEBUG: aggregate: building update for vlan: 80 type: pkts protocol: 6 file: /var/www/ixp_p2p/ipv6/pkts/aggregate/aggregate.ipv6.pkts.vlan00080.rrd
DEBUG: flush completed at 1553244579
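
As an aside, a quick way to watch only the per-sample accept/drop decisions
in this debug stream - a minimal sketch, assuming GNU grep and the same
invocation as above:

# filter the live --debug output down to the accept/drop lines only
./sflow-to-rrd-handler --macdbtype=configured --debug 2>&1 \
  | grep --line-buffered -E 'DEBUG: (accepted|dropped) update for:'

The "DEBUG: accepted update for: ..." lines mentioned further down are what
should presumably start appearing once sampled traffic matches a configured
MAC / VLAN interface.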


On Fri, 22 Mar 2019 at 10:01, Vladislav Leontjev <vladislav.leontjev at gmail.com> wrote:

> Hello Nick,
>
> I appreciate your help very much, thank you.
>
> 1. grapher.php - OK, I got it back! :)
>
> Is this output of rrdtool dump p2p.ipv4.bytes.src-00001.dst-00003.rrd OK,
> i.e. is sflow reading the data correctly, or is it an error? (There is only
> one BGP session, to a testing Bird route server, with no traffic for now
> from any peering client/port, and I have two testing peers.)
> ......
>                         <!-- 2019-03-21 02:00:00 EET / 1553126400 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-03-21 04:00:00 EET / 1553133600 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-03-21 06:00:00 EET / 1553140800 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-03-21 08:00:00 EET / 1553148000 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-03-21 10:00:00 EET / 1553155200 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-03-21 12:00:00 EET / 1553162400 --> <row><v>NaN</v><v>NaN</v></row>
>                 </database>
>         </rra>
>         <rra>
>                 <cf>MAX</cf>
>                 <pdp_per_row>24</pdp_per_row> <!-- 7200 seconds -->
>
>                 <params>
>                 <xff>5.0000000000e-01</xff>
>                 </params>
>                 <cdp_prep>
>                         <ds>
>                         <primary_value>NaN</primary_value>
>                         <secondary_value>0.0000000000e+00</secondary_value>
>                         <value>0.0000000000e+00</value>
>                         <unknown_datapoints>3</unknown_datapoints>
>                         </ds>
>                         <ds>
>                         <primary_value>NaN</primary_value>
>                         <secondary_value>0.0000000000e+00</secondary_value>
>                         <value>0.0000000000e+00</value>
>                         <unknown_datapoints>3</unknown_datapoints>
>                         </ds>
>                 </cdp_prep>
>                 <database>
>                         <!-- 2019-01-18 02:00:00 EET / 1547769600 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 04:00:00 EET / 1547776800 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 06:00:00 EET / 1547784000 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 08:00:00 EET / 1547791200 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 10:00:00 EET / 1547798400 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 12:00:00 EET / 1547805600 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 14:00:00 EET / 1547812800 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 16:00:00 EET / 1547820000 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 18:00:00 EET / 1547827200 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 20:00:00 EET / 1547834400 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-18 22:00:00 EET / 1547841600 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-19 00:00:00 EET / 1547848800 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-19 02:00:00 EET / 1547856000 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-19 04:00:00 EET / 1547863200 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-19 06:00:00 EET / 1547870400 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-19 08:00:00 EET / 1547877600 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-19 10:00:00 EET / 1547884800 --> <row><v>NaN</v><v>NaN</v></row>
>                         <!-- 2019-01-19 12:00:00 EET / 1547892000 --> <row><v>NaN</v><v>NaN</v></row>
>
> This graph indicates that the RRD files are being generated, but that
> they contain no data.  Can you run "sflow-to-rrd-handler --debug" and
> see if the output contains lines like "DEBUG: accepted update for: ..."?
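
A minimal sketch for checking whether a p2p RRD has picked up any real data
yet, without reading through the whole dump - assuming rrdtool is installed
and using the file name shown earlier:

# print the most recent datapoint and its timestamp
rrdtool lastupdate p2p.ipv4.bytes.src-00001.dst-00003.rrd
# count dump rows holding a numeric value rather than NaN
rrdtool dump p2p.ipv4.bytes.src-00001.dst-00003.rrd | grep -c '<v>[0-9]'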
>
> # ./sflow-to-rrd-handler --macdbtype=configured --debug
> (I am using --macdbtype=configured as you suggested previously, because of
> "FATAL: could not read IXP Manager API call on
> http://10.81.7.10/api/v4/sflow-db-mapper/learned-macs".)
> It simply hangs without any output; could that be because there is minimal
> (one BGP session) or no traffic, as I mentioned above?
>
> P.S. Only in a week will I be able to move all of this onto the real
> production SW; before that I want to get it (the IXP Manager config and
> sFlow) working here (in a lab).
> I want to understand: from the configuration and sFlow perspective, is
> everything functioning as it should? That is, once traffic is high enough,
> will I see the graphs as expected?
>
> Thank you.
>
> Vladislav.
>
> On Thu, 21 Mar 2019 at 23:25, Nick Hilliard <nick at foobar.org> wrote:
>
>> Hi Vladislav,
>>
>> Vladislav Leontjev wrote on 20/03/2019 14:29:
>> >
>> ----------------------------------------------------------------------------
>> >   /srv/ixpmanager/config/grapher.php
>> >
>> -----------------------------------------------------------------------------
>> >
>> >          'sflow' => [
>> >              // show sflow / p2p links on the frontend
>> >              'enabled' => env( 'GRAPHER_BACKEND_SFLOW_ENABLED', false ),
>> >
>> >              // for larger IXPs, it's quite intensive to display all
>> the
>> > graphs
>> >              'show_graphs_on_index_page' => env(
>> > 'GRAPHER_BACKEND_SFLOW_SHOW_ON_INDEX', false ),
>> >
>> >              // where to find the MRTG rrd files
>> >              'root'  => env( 'GRAPHER_BACKEND_SFLOW_ROOT',
>> > 'http://10.81.7.10/grapher-sflow/' ),
>> >          ],
>>
>> oops, you shouldn't change this file under any circumstances.  Can you
>> reset this file using "git checkout"?
>>
>> Any customisation you might need to make can be handled in .env.
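
A minimal sketch of that reset, assuming the install lives in /srv/ixpmanager
as the paths above suggest:

# discard local edits to the packaged grapher config
cd /srv/ixpmanager && git checkout -- config/grapher.php

Any sflow-related overrides then stay in .env, as in the
GRAPHER_BACKEND_SFLOW_* lines quoted below.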
>>
>> > ------------------------------------------------------------------------
>> > /srv/ixpmanager/.env
>> > ------------------------------------------------------------------------
>> > GRAPHER_BACKENDS="mrtg|sflow"
>> > GRAPHER_BACKEND_SFLOW_ENABLED=true
>> > GRAPHER_BACKEND_SFLOW_ROOT="http://10.81.7.10/grapher-sflow"
>> > GRAPHER_ACCESS_CUSTOMER=0
>> > GRAPHER_ACCESS_P2P=1
>> >
>> > ----------------------------------------------------------------
>>
>> that looks correct.
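
One way to sanity-check that GRAPHER_BACKEND_SFLOW_ROOT is actually serving
the rrd files - a sketch assuming curl and the aggregate file name from the
debug output at the top of this mail:

# a 200 response means the grapher can fetch rrd files via that root URL
curl -I http://10.81.7.10/grapher-sflow/ipv4/bytes/aggregate/aggregate.ipv4.bytes.vlan00080.rrd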
>>
>> > Files have been gathered as they should be;
>> > #/var/www/ixp_p2p/ipv4/bytes/p2p/src-00001$ ls
>> > p2p.ipv4.bytes.src-00001.dst-00003.rrd
>>
>> You should use "rrdtool dump p2p.ipv4.bytes.src-00001.dst-00003.rrd" to
>> ensure that sflow is reading the data correctly.
>>
>> > I have got 2 peering ports UP and, so far, 2 rrd files accordingly; I
>> > will leave "./sflow-to-rrd-handler --macdbtype=configured" running for
>> > some time to gather more of them.
>> >
>> > ------------------------------------------------------------------
>> > local install of Apache: added lines (using the same Apache that was
>> > installed with IXP Manager)
>> > ------------------------------------------------------------------
>> > /etc/apache2/apache2.conf
>> >
>> > Alias /grapher-sflow /var/www/ixp_p2p
>> > <Directory "/data/ixpmatrix">
>> >          Options None
>> >          AllowOverride None
>> > </Directory>
>> > ----------------------------------------------------------------
>> > http://10.81.7.10/grapher-sflow/ shows the content of /var/www/ixp_p2p
>> > with all of its subdirs;
>> > screenshot of the Apache listing: https://prnt.sc/n0ijkk
>>
>> Correct.
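
One detail worth double-checking in the snippet above: the <Directory> path
is "/data/ixpmatrix" while the Alias target is /var/www/ixp_p2p, so the two
were presumably meant to match. A quick way to verify Apache still accepts
the config after any such edit - assuming a Debian/Ubuntu-style install,
which the /etc/apache2 path suggests:

# syntax-check the config, then reload only if it passes
apachectl configtest && systemctl reload apache2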
>>
>> > -------------------------------------------------------------
>> > /usr/local/etc/ixpmanager.conf
>> > -------------------------------------------------------------
>> >
>> > <sql>
>> >          dbase_type      = mysql
>> >          dbase_database  = ixpmanager
>> >          dbase_username  = ixpmanager
>> >          dbase_password  = *
>> >          dbase_hostname  = 127.0.0.1
>> >          #dbase_portname = /tmp/mysql.sock
>> > </sql>
>> >
>> > <ixp>
>> >          sflow_rrdcached = 0
>> >          sflowtool = /usr/local/bin/sflowtool
>> >          sflowtool_opts = -4 -p 6343 -l
>> >          sflow_rrddir = /var/www/ixp_p2p
>> > #       debug = 1
>> >          apikey = *
>> >          apibaseurl = http://10.81.7.10/api/v4
>> >          macdbtype = configured
>> > </ixp>
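
To confirm that sFlow datagrams are actually reaching UDP port 6343, a sketch
using the same sflowtool binary and options as in the config above - run it
only while sflow-to-rrd-handler is stopped, since both would try to bind the
same port:

# print the first few decoded samples, then exit
/usr/local/bin/sflowtool -4 -p 6343 -l | head -5

FLOW/CNTR lines appearing here would mean the samples are arriving at the box.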
>> >
>> > Do I understand right:
>> >
>> > 1. If I see a page NOT with "dummy" and NOT with "no image found", does
>> > that mean it is normally trying to interpret the sflow-converted
>> > *.rrd's? -> screenshot of the vlan graph: https://prnt.sc/n0im4x
>>
>> The "dummy" / "no image found" are only for mrtg, not sflow.
>>
>> > 2. Once there are enough *.rrd files, will the graph (screenshot of the
>> > vlan graph: https://prnt.sc/n0im4x)
>> > be shown with in/out statistics at the bottom, as it should be?
>> > 3. GRAPHER_BACKENDS="mrtg|sflow" - does it try mrtg first, and if that
>> > fails, then sflow?
>>
>> no, it means that the Grapher back-end uses both mrtg and sflow.  These
>> are separate things: the MRTG grapher produces interface graphs and the
>> sflow grapher produces point-to-point graphs.
>>
>> > 4. The log line "production.ERROR: No backend available to process this
>> > graph" - is that because mrtg is not installed?
>> > 5. Do I understand the documentation right: the overall peering graph is
>> > available only via mrtg? Then I need to implement that too?
>>
>> yes, you need to install mrtg if you want regular interface graphs.
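
A minimal sketch of that install on a Debian/Ubuntu-style system, which the
/etc/apache2 path above suggests; wiring MRTG into IXP Manager is a separate
step covered in the IXP Manager documentation:

apt-get install mrtg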
>>
>> Nick
>>
>> >
>> > Thank you.
>> >
>> > Vladislav.
>> >
>> > On Wed, 20 Mar 2019 at 11:00, Nick Hilliard <nick at foobar.org> wrote:
>> >
>> >     Vladislav Leontjev wrote on 20/03/2019 06:42:
>> >      > ./sflow-to-rrd-handler --macdbtype=configured
>> >      >
>> >      > runs without any error output; in my understanding I should then
>> >      > get rrd files in the dir I mentioned in the cfg?
>> >      >
>> >
>> >     Yes, if there are MAC addresses defined in the database, you should
>> >     see .rrd files in a bunch of subdirectories in /var/www.
>> >
>> >     If you're not seeing any files in there, then you need to use
>> >     "--debug" to see what's happening.
>> >
>> >      > I'm using v4.9.2; in my situation (until the next bug-fix) the way
>> >      > to get *.rrd is ./sflow-to-rrd-handler --macdbtype=configured? Or
>> >      > did I mess something up?
>> >     Just stick with v4.9.2 for the moment.
>> >
>> >     Nick
>> >
>>
>

