[ixpmanager] hardware specification recommended

Basil Elbalaawi saifbasilyazan at gmail.com
Fri Nov 6 21:50:35 GMT 2020


Thank you for the information.

But which switch is preferred for use with IXP Manager: the Nexus 9500 Series
<https://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html>
or the OmniSwitch 9900
<https://www.al-enterprise.com/en/products/switches/omniswitch-9900>
(*Alcatel-Lucent
Enterprise <https://www.al-enterprise.com/>*)?
Or would any switch supported by the IXP Manager application be suitable?

Thanks


On Fri, Nov 6, 2020 at 2:38 PM Nick Hilliard (INEX) <nick at inex.ie> wrote:

> Hi Basil,
>
> We would recommend 2x route servers and 1x route collector.
>
> The route servers provide bilateral peering, and we view this as part of
> the IXP core infrastructure, so we run two of these.  If you have two, it
> means you can run maintenance on one while keeping the other in service.
> I.e. the overall peering service will be more reliable.
>
> The route collector is for debugging, and can be used by organisations who
> don't want to connect to the route servers.
>
> We would also recommend 2x RPKI servers for the same reason.
>
> You should have a quarantine route collector to test new connections
> coming into the IXP.
>
> The sflow VM will need fast disks.  SSD is best.  This will only become a
> problem if you have lots of participants at the IXP, e.g. 50+
>
> The RPKI VM scale has nothing to do with the size of the IXP, so you're
> best to ask the Routinator people about that.
>
> INEX uses approximately the following configuration:
>
>                 cpu    memory
>   rs1            4      8G
>   rs2            4      8G
>   rc1            2      2G
>   quarantine     1      2G
>   monitor        4      4G
>   sflow          2      4G
>   ixpmanager     2      4G
>   database       4      8G
>   rpki01         2      2G
>   rpki02         2      2G
>
> The physical hardware we run is 2x Dell R730 (Intel E5-2630v4, 10-core
> servers, 96G memory).
>
> These VMs are mostly idle. The Hypervisor CPU load is usually around 10%.
> We run the route servers on physically separate hardware (Dell R320), just
> in case we have a problem with the primary management hypervisors.
>
> The important thing is that everyone's configuration will be different.
> These are all VMs, so if you need to add or remove memory / CPUs, that can
> be done very easily.
>
> Nick
>
> Basil Elbalaawi <saifbasilyazan at gmail.com>
> 5 November 2020 at 16:59
>
> Dear Support
>
>
>
> I have almost finished testing the IXP Manager full stack system, and will
> test sflow next week. Could you help me select the hardware specification
> for each service in the system, especially memory, CPU core count, and
> storage size? Please note that the services are separated into multiple
> virtual machines as follows:
>
>
>
> VM1: IXP Manager web front-end and database
>
> VM2: monitoring VM (mrtg / nagios)
>
> VM3: sflow VM
>
> VM4: 1 RPKI VM (Routinator 3000)
>
> VM5: 1 route server with Bird (Birdseye).
>
>
>
> My server is an "HPE ProLiant DL360 Gen10 8SFF" running the ESXi
> hypervisor. All our internal traffic at PSIX-Ramallah, which has 16
> members, is about 5-10 Gbps.
>
>
>
>
>
>          Memory size   CPU core numbers   Storage size
>   VM1
>   VM2
>   VM3
>   VM4
>   VM5
>
> Thanks in advance,
>
>
>
>
>
> _______________________________________________
> INEX IXP Manager mailing list
> ixpmanager at inex.ie
> Unsubscribe or change options here:
> https://www.inex.ie/mailman/listinfo/ixpmanager
>
>
>
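As a quick sanity check on the sizing figures quoted above, the totals can be tallied in a few lines of Python. This is just a rough capacity-planning sketch, not an official IXP Manager recommendation, and note that INEX runs the two route servers on separate R320 hardware, so not all of these VMs actually land on the two R730s:

```python
# Tally the VM sizing Nick quoted above and compare against the stated
# hypervisor hardware (2x Dell R730: 10 cores / 96G memory each).
vms = {
    # name:       (vcpus, memory_gb)
    "rs1":        (4, 8),
    "rs2":        (4, 8),
    "rc1":        (2, 2),
    "quarantine": (1, 2),
    "monitor":    (4, 4),
    "sflow":      (2, 4),
    "ixpmanager": (2, 4),
    "database":   (4, 8),
    "rpki01":     (2, 2),
    "rpki02":     (2, 2),
}

total_vcpus = sum(c for c, _ in vms.values())
total_mem_gb = sum(m for _, m in vms.values())

# Combined capacity of the two R730 hypervisors
host_cores, host_mem_gb = 2 * 10, 2 * 96

print(f"total vCPUs:  {total_vcpus} (hosts: {host_cores} cores)")
print(f"total memory: {total_mem_gb}G (hosts: {host_mem_gb}G)")
# Memory fits comfortably; vCPUs are oversubscribed ~1.35:1, which is
# fine here because, as noted above, the VMs are mostly idle (~10% load).
```

The point of the exercise: memory, not CPU, is the binding constraint for a stack like this, and modest CPU oversubscription is harmless when the guests are mostly idle.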

