As for disabling the on-die GPU, you should be able to manage that from within Windows via the Nvidia driver settings. I wonder if, because the system is defaulting to the AMD, the option isn't coming up properly. After a little googling, I found the A4-5xxx series does not play nice with dual-graphics setups. You may be able to go into your motherboard BIOS and change which graphics adapter is used by default, if not disable the on-board one completely.
I understand that... that's why I linked specs for both PC and laptop hardware, to demonstrate the loss you take when you use a cut-down version of a card, even when moving to a newer family of GPUs. I didn't expect to find a direct comparison between a 2+ year old laptop GPU and a just-released desktop GPU, so I didn't bother trying. You mentioned considering a move to a 750 Ti for increased performance, which may sorely disappoint you because you'd be staying in the same class of card. The point I was trying to make is that moving from a 600-series laptop model to a 700-series PC model doesn't automatically mean a monster increase in frame rate. Each family has cards designed for different uses: one end is for casual use, draws less power, and typically costs less; the other end has the high-performance cards that are power hungry and cost considerably more.
You have a low-power version of the 700 series, so it is going to be on the lower side of the scores for that family, and you may need to adjust your expectations a bit. When you compromise and end up with things like a lower stream processor count and a narrower bus, it can impact how efficiently various effects are applied. The 750s are 128-bit, low-power 55/60 W cards; the low-end OEM 192-bit 760 is a 130 W card, while the retail 256-bit version is rated at 170 W. With a card (and possibly the system as a whole) designed more for the HTPC market, you may find yourself needing to scale back graphics settings as a trade-off to get higher frame rates.
And that benchmark is static, standalone content. It also measures load times as part of the score; it isn't designed to measure rendering during online game play, so it isn't the best comparison to make against the live online environment. I scored 5382 on an old C2D system with an ATI 4850 and only 4 GB of DDR2 memory, a system thrown together 5 years ago with some components dating back to 2006. By notching quality down to Standard Desktop, I was able to hit 6620 on that same system. But it didn't fare nearly as well as expected when I went live; I got about half the frame rate. (The benchmark does track your frame rate, but even that is misleading unless you watch it closely AND look at the end-result details.) During the benchmark, your system isn't also exchanging all the web traffic that carries things like other players' custom character data and their actions. It isn't dealing with the online dynamics of increased character counts or the delayed server responses that dictate when character actions get rendered, and it probably doesn't even load entire zones the way the game does. The live play environment simply stresses the various subsystems much harder than the benchmark's "on rails" test, so offline performance is not indicative of online performance for this game.
Here's something that may prove a little more telling about your processor, though... open an elevated command prompt (right-click CMD and choose Run as administrator) and run these two commands to get some CPU scores from the Windows Experience Index assessments:
winsat cpu -encryption
winsat cpu -compression
If they aren't returning something around the 250/500 mark or better, your CPU may be a bit of a bottleneck as well.
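If you'd rather not eyeball the wall of text winsat spits out, here's a rough sketch of pulling just the MB/s throughput numbers out of it. The sample lines below are made up for illustration (the real output format varies a bit between Windows versions), so treat the pattern as a starting point, not gospel:

```python
import re

# Hypothetical winsat-style output; the real format varies by Windows version.
sample_output = """\
> CPU AES256 Encryption                        251.30 MB/s
> CPU Vista Compression                        515.75 MB/s
> Total Run Time 00:00:05.41
"""

def extract_scores(text):
    """Pull the MB/s throughput figures out of winsat-style output lines."""
    scores = {}
    for line in text.splitlines():
        m = re.search(r"CPU\s+(.+?)\s+([\d.]+)\s+MB/s", line)
        if m:
            scores[m.group(1).strip()] = float(m.group(2))
    return scores

print(extract_scores(sample_output))
```

In practice you'd pipe the command's output to a file (winsat cpu -encryption > scores.txt) and read that in instead of the hard-coded sample.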