I'm not an expert on this, but there are some obvious considerations. The question is not just about performance; it's also about whether the server stays up at all. You'll agree that you could load a server with many hard drives, GPGPU cards, or whatever else, and the power consumption will not be the same in each case. If you only have a couple of drives in your RX200 S6, you can work fine with a 450 W PSU. But if you plan to populate all the drive bays, or install something with high power consumption (I'm thinking, for example, of a high-end GPGPU card that can draw 150 to 250 W on its own), one 450 W PSU will probably not be enough, and the result will be some unwelcome server shutdowns caused by power failures.
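To make that concrete, here is a minimal power-budget sketch. All the component wattages and the 80% headroom rule are illustrative assumptions, not measured figures for the RX200 S6; check each component's datasheet for real numbers.

```python
# Hypothetical power budget for a loaded 1U server (illustrative wattages,
# not measured figures -- check each component's datasheet).
components = {
    "motherboard_and_cpus": 180,  # watts
    "hdds": 8 * 10,               # eight drives at ~10 W each
    "gpgpu": 250,                 # high-end accelerator card
    "fans_and_misc": 40,
}

psu_rating = 450  # watts
headroom = 0.80   # keep sustained draw under ~80% of the PSU rating

peak_draw = sum(components.values())
print(f"Estimated peak draw: {peak_draw} W")
if peak_draw > psu_rating * headroom:
    print(f"A single {psu_rating} W PSU is undersized for this build.")
```

With these assumed numbers the estimated peak is 550 W, well past what a single 450 W unit should be asked to sustain.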
Now imagine a redundancy scenario: a server with a peak consumption of 500 W and, as in your example, one 450 W and one 770 W PSU. You have enough power to run the server and some kind of failover protection. You could handle a failure of the 450 W PSU with no problem, but you could have a serious problem if the 770 W PSU fails, because the 450 W unit cannot supply enough power for the server at certain moments. Remember that a server's power consumption is not constant: more load means more power draw. A shortfall there means a server shutdown, not just a performance issue.

You can use the iRMC console on your server to see its real-time power consumption and get a good idea of which PSU or PSUs are appropriate; obviously, take those readings under a heavy-load scenario. Then balance the money you can spend against the failover tolerance you want to implement, and you'll have the answer to your question.
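The failover question above can be sketched as a simple check: for each PSU, does the remaining capacity still cover peak draw if that unit fails? The 500 W peak and the PSU ratings are taken from the example; the dictionary names are just labels I made up.

```python
# N+1 check for the example above: peak draw of 500 W, fed by a 450 W
# and a 770 W PSU (ratings from the example; names are hypothetical).
peak_draw = 500  # watts, under heavy load

psus = {"psu_450": 450, "psu_770": 770}

# If one PSU fails, can the remaining capacity carry the peak load alone?
for failed in psus:
    remaining = sum(w for name, w in psus.items() if name != failed)
    ok = remaining >= peak_draw
    print(f"{failed} fails -> remaining capacity {remaining} W: "
          f"{'OK' if ok else 'server may shut down under load'}")
```

Run with these numbers, losing the 450 W unit is fine (770 W remains), but losing the 770 W unit leaves only 450 W against a 500 W peak, which is exactly the unexpected-shutdown case described above.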
Hope that helps,
Fujitsu Select Partner