Before you can onboard a VNF onto a MANO software stack such as ONAP, you need to select a VNF vendor. While comparing the functionality of competing VNFs is well understood from the PNF days, comparing their performance is a lot more difficult for the following reasons:
Differences between vendor environments: The NFVI/VIM platform, its configuration, and the traffic generators used by different vendors are likely to differ. Similarly, the actual performance tests may vary significantly. This makes comparing VNFs based on vendor-provided metrics very difficult. While present to some degree, this issue was not as pronounced in the pre-NFV world.
Differences from a real production environment: If a vendor generates its metrics using 10 NICs and 100% of the available CPU resources, those metrics are not useful for a real-world deployment. Similarly, the tests need to reflect real-world traffic conditions to be useful, which may or may not be the case in vendor-provided results. Like the previous issue, this problem is also heightened in the NFV era.
The OPNFV Yardstick Network Service Benchmarking (NSB) tool helps solve these problems. It runs performance tests against a single VNF or an entire network service. A CSP can thus use a consistent NFVI/VIM platform that reflects its production environment (this could be an OPNFV scenario if there is a desire to keep the validation platform vendor agnostic). The performance tests themselves can also be written in a vendor-agnostic manner. This methodology can be used to compare vendors consistently, validate VNFs, and characterize their performance. NSB is fully automated, so it can be plugged into a CI pipeline as well.
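As a sketch of the CI-pipeline idea, the snippet below shows how NSB results could act as an automated performance gate: the run's KPIs are compared against thresholds, and the pipeline fails if any target is missed. The KPI names, targets, and result layout here are illustrative assumptions for the sake of the example, not the actual NSB output schema.

```python
import json

# Hypothetical CI gate over NSB results. The KPI names and targets below
# are illustrative assumptions; a real pipeline would use the KPIs and
# output format of the actual NSB test cases it runs.
THRESHOLDS = {
    "throughput_gbps": 9.0,  # higher is better (assumed target)
    "latency_usec": 50.0,    # lower is better (assumed target)
}

def check_results(results):
    """Return a list of (kpi, value, target) tuples that failed the gate."""
    failures = []
    for kpi, target in THRESHOLDS.items():
        value = results.get(kpi)
        if value is None:
            failures.append((kpi, None, target))      # KPI missing entirely
        elif kpi.startswith("latency"):
            if value > target:                        # latency: fail if above target
                failures.append((kpi, value, target))
        elif value < target:                          # throughput: fail if below target
            failures.append((kpi, value, target))
    return failures

if __name__ == "__main__":
    # In a real pipeline this dict would be loaded from the file written
    # by the NSB run, e.g. json.load(open("nsb_results.json")).
    sample = {"throughput_gbps": 9.4, "latency_usec": 42.0}
    failed = check_results(sample)
    if failed:
        raise SystemExit(f"Performance gate failed: {failed}")
    print("Performance gate passed")
```

Because the gate is just a script over the result file, the same check can run unchanged for every vendor's VNF on the same NFVI/VIM platform, which is the consistency the benchmarking methodology is after.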