Latest Forum Threads in VMware Communities

older | 1 | .... | 6689 | 6690 | (Page 6691) | 6692 | 6693 | .... | 6759 | newer

    0 0

    Hi,

     

    I need to integrate vCenter with EMC RecoverPoint. The integration requires the vCenter certificate.

     

    Please let me know the exact location of the certificate on the vCenter 5.5 appliance.

     

    Regards,

    Hakim. B


    0 0

    Hello Experts,

     

    I am unable to add a vSphere endpoint in vRA 7.5. Please find the vSphere agent error log below, and please help me resolve this issue.

     

    vcac: [component="iaas:VRMAgent.exe" priority="Debug" thread="5968"] [sub-thread-Id="5"  context=""  token=""] Ping Sent Successfully : [<?xml version="1.0" encoding="utf-16"?><pingReport agentName="Vcenter" agentVersion="7.5.0.0" agentLocation="Vraagn01" WorkitemsProcessed="0"><Endpoint /><ErrorCode>ENDPOINT_NOT_FOUND</ErrorCode><ManagementEndpoint Name="Vcsa.vralab.com" /><Nodes /><AgentTypes><AgentType name="Hypervisor" /><AgentType name="vSphereHypervisor" /></AgentTypes></pingReport>]
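    For context, the pingReport payload in that log line already names the failure: ENDPOINT_NOT_FOUND, which typically indicates that the endpoint name the agent was installed with does not exactly match the name of the vSphere endpoint defined in vRA. As a minimal illustration (plain Node.js, not a vRA tool), the error code can be pulled out of such a log line with a regex:

```javascript
// Minimal sketch: extract the <ErrorCode> from an agent pingReport payload.
// logLine below is a shortened, well-formed stand-in for the real log entry.
const logLine = '<pingReport agentName="Vcenter" agentVersion="7.5.0.0">' +
    '<ErrorCode>ENDPOINT_NOT_FOUND</ErrorCode></pingReport>';

function extractErrorCode(xml) {
    const m = xml.match(/<ErrorCode>([^<]*)<\/ErrorCode>/);
    return m ? m[1] : null;  // null when no ErrorCode element is present
}

console.log(extractErrorCode(logLine)); // ENDPOINT_NOT_FOUND
```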


    0 0

    Reading through the vCenter 6.7 release notes, a key thing that caught my eye was that we no longer have to worry about complex external PSC and load-balancer configurations to get linked mode - Hooray!

     

    I have two questions about it if anyone knows….

     

    1. To move from a vSphere 6.0 multi-site setup (2 x PSCs in an HA config with custom certs, plus 1 vCenter per site) to the single embedded vCenter 6.7 per-site model, am I right in thinking that we'd have to complete a full upgrade to 6.7 first, then deploy a new embedded appliance, and back up and restore vCenter into the new embedded appliance?

     

    Seems pretty brutal, but I really do despise the whole PSC/HA/HLB setup and would be glad to get away from it.

     

    2. Aside from SSO performance, what would be the downsides of moving from an external to an embedded model with 6.7?

     

    Thanks


    0 0

    I've been scratching my head over this one for a bit trying to get vRealize Orchestrator to properly get/set/update vCD metadata against a vApp/VM. How do I determine the type of metadata value? All of the examples I can see rely on the 'old' vCD API where the only valid metadata type was 'String'.

     

    With 'testVm' set to a VM object correctly I can use the following to see all the 'String' metadata keys and values:

     

    var metadata = testVm.getMetadata();
    var metadataEntries = metadata.getTypedEntries().enumerate();
    for each (var metadataEntry in metadataEntries) {
        // Unwrap the typed value, assuming a 'String' entry
        var value = metadataEntry.typedValue.getValue(new VclMetadataStringValue()).value;
        System.log("Metadata key: " + metadataEntry.key + " value: " + value);
    }

     

    This code works fine for metadata where the data type is 'String' (since it uses VclMetadataStringValue), but it doesn't retrieve any keys/values from metadata fields using the other three data types (boolean, date/time, or number). What expression can I use to determine the type of the metadataEntry so I can handle it appropriately?
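    Not an authoritative answer, but a sketch of one approach: vRO's standard System.getObjectType() call can report the plugin type of the typed value, and the branches below assume non-string wrapper classes named by analogy with VclMetadataStringValue (VclMetadataNumberValue, VclMetadataBooleanValue, VclMetadataDateTimeValue) - verify the exact names in the vCD plug-in's API explorer. This is vRO JavaScript and is not runnable outside Orchestrator:

```javascript
// Sketch only: branch on the reported type of each typed metadata entry.
// The number/boolean/date-time wrapper class names are assumptions.
var metadata = testVm.getMetadata();
var metadataEntries = metadata.getTypedEntries().enumerate();
for each (var metadataEntry in metadataEntries) {
    var type = System.getObjectType(metadataEntry.typedValue);
    var value = null;
    if (type == "VclMetadataStringValue") {
        value = metadataEntry.typedValue.getValue(new VclMetadataStringValue()).value;
    } else if (type == "VclMetadataNumberValue") {
        value = metadataEntry.typedValue.getValue(new VclMetadataNumberValue()).value;
    } else if (type == "VclMetadataBooleanValue") {
        value = metadataEntry.typedValue.getValue(new VclMetadataBooleanValue()).value;
    } else if (type == "VclMetadataDateTimeValue") {
        value = metadataEntry.typedValue.getValue(new VclMetadataDateTimeValue()).value;
    }
    System.log("Metadata key: " + metadataEntry.key + " (" + type + ") value: " + value);
}
```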

     

    Thanks in advance, Jon.


    0 0

    I deployed ghettoVCB using the VIB onto an ESXi 6.0 host a couple of months ago, and recently I've tried putting it on a new 6.5 host.  The new 6.5 host is still in the process of being stood up, so I can reboot it at will for testing, while the 6.0 host is running some VMs that people would like me to leave running (and which are currently being backed up, so long as the host doesn't get rebooted!).  Short version: I'm finding that each reboot undoes any and every change I make outside of the VIB install, and I'm looking for advice on how to commit my changes so that they survive a reboot.

     

    Longer ramble: I've come to realize that a reboot seems to wipe out everything I add to the ESXi file system, outside of installing the VIB.  I can modify the .conf file, or create a derived copy, or even modify the settings in the .sh, and ghettoVCB works smoothly until the next reboot happens and my changes are reverted to the VIB as it installed.  Is there a way to make my configuration changes "stick"?  I even tried removing the VIB, copying the files to a ghettoVCB folder, and running from there; again it works until my folder is obliterated by the next reboot.  I don't love the idea of having to build my own VIB just to inject a conf file that won't get blown away.  I don't want to store everything on a datastore because there are other hands in the mix that might decide to relocate said folder, and it's easier to just leave it hidden in the terminal session where they can't/don't browse.

     

    What options do I have? Do I absolutely need to deploy a Docker VM for no purpose other than building my own VIB, just to save changes without resorting to putting ghettoVCB onto the local datastore?
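    One commonly used workaround (a sketch, not ghettoVCB-specific guidance): /etc/rc.local.d/local.sh is one of the few files ESXi preserves across reboots, so a custom conf can be recreated from it at every boot. The /opt/ghettovcb path and the settings shown are hypothetical examples; adjust them to where the VIB actually installed:

```shell
# Appended to /etc/rc.local.d/local.sh (a file ESXi persists across reboots).
# Recreates a custom ghettoVCB conf at boot; the path and settings below are
# example values only.
mkdir -p /opt/ghettovcb
cat > /opt/ghettovcb/my_ghettoVCB.conf << 'EOF'
VM_BACKUP_VOLUME=/vmfs/volumes/backup
DISK_BACKUP_FORMAT=thin
EOF
```

Running /sbin/auto-backup.sh after editing local.sh should fold the change into the next state backup immediately rather than waiting for the periodic one.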


    0 0

    AutoVM is an open-source Virtual Private Server (VPS) manager based on VMware ESXi that gives hosting companies and VPS sellers full control and automation.

    With AutoVM you can assign each user their own panel covering everything related to their VPS.

    Notably, among AutoVM's additional tools, beyond automatic monitoring you can also hand your billing management over to AutoVM, so you can deliver faster service to your customers.

     

    Some of the features


    • Bandwidth monitoring and VPS traffic-usage management.
    • Easy installation without any changes to the ESXi servers.
    • Free modules for managing VPSes from the WHMCS client area.
    • Automatic VM provisioning after successful payment.
    • Automatic IP and network-adapter assignment once the VM is created.
    • Automatic installation of the operating system.
    • Ability to assign existing VMs to WHMCS users.

     

    Prerequisite


    The AutoVM platform is designed to be compatible with default VMware ESXi settings and does not require any changes to the network design. To launch the AutoVM platform, you can run it on a hosting control panel such as cPanel or DirectAdmin.

     

    Get started with a free licence


    To get and set up the system, please visit the installation article. If you have any questions, please read the FAQ section. If you do not find your answer, please contact us from the client area.

     

    Screenshot_20180204_114254.png


    0 0

    Dear Team,

     

    We have reinstalled vROps in our VDI environment with the same name and the same IP.

     

    vROps version 6.2

     

    Now I am trying to reinstall the vRealize Operations for Horizon broker agent on our connection broker server. After installation, when it asks for pairing, it shows as already paired with the vROps server, and when we try to pair again, it crashes.

     

    We have tried uninstalling, deleting the vROps agent folder, and then installing again. It still fails to pair and crashes.

     

    We need urgent help with this.

     

    Thanks in Advance.

     

    Regards,

    Rajesh


    0 0
  • 12/23/18--03:27: NSX performance issues
  • Hi all, I have an NSX home lab running. Here is a basic overview of the setup:

     

    My PC is on the 192.168.1.x/24 network. UniFi USG as the gateway

    ESXi Hosts, vCSA, NSX Manager, NSX Control cluster are on the 10.0.0.0/24 network, tagged VLAN 2, on a 10Gbit switch. Also on this network (and switch) is a QNAP NAS. Again, using the UniFi USG as the gateway.

    I don't think it matters, but I am running vSAN, and the hosts use a directly connected network for vSAN and vMotion. Witness traffic is tagged on the 10.0.0.0/24 VMkernel port where the witness appliance resides.

    I have two logical networks, 5001 and 5002, 172.16.0.0/24 and 172.16.10.0/24 respectively.

    I have one Edge gateway. This has an interface for 5001 and 5002. It also has an interface on the VLAN 2 port group for external traffic.

    The VMs on the 5001 and 5002 networks use the edge as their gateway. The edge uses the UniFi USG as its gateway.

    I then have a static route on the UniFi USG which directs 5001 and 5002 traffic to the interface on the VLAN 2 port group of the Edge.

     

    Not the most complex of setups, I think. I wasn't sure if I needed an Edge for each logical network, but it's working fine with just the single one.

     

    Running iPerf tests from host to host I get the expected 10Gbps speed.

    Running iPerf tests from the host to the QNAP NAS I get 10Gbps.

    Running it from my PC to a VM on a logical network I get 1Gbps (PC is only 1Gbit, as is the UniFi USG).

     

    The issue that I am having is that RDP performance from my PC to a VM is poor; it's like it's on 10 frames a second. It does this whether the VM is in the VLAN 2 port group or connected to either logical switch.

     

    I'm guessing here that it's the UniFi USG causing the issues? I do have a pfSense appliance I could try I guess.

     

    The second issue I am having is that if I do an iPerf test between VMs, either on the same logical network or on separate networks, traffic appears limited, peaking around 5Gbps but averaging around 2-3.

     

    This leads me to believe that the issue is in the edge configuration somehow, or is this normal behaviour? I'd have thought I would see the full 10Gbps.

     

    Thanks!


    0 0

    Hi All,

     

    While checking the Sessions tab in the Horizon View console, I can see that a single user has multiple sessions (one connected and a second in a disconnected state) for the same dedicated VDI VM.
    I need to understand the reason behind this and whether it will cause any issues.
    Need to understand the reason behind it and to understand will this cause any issue.

     

    Also, in this scenario, will the user connect to the same session, or will a new session be created?

     

    Thanks in Advance.

     

    Regards,

    Rajesh


    0 0
  • 09/25/15--05:42: vMotion failing
  • Dear Team,

     

    I have a vCenter 5.5 Server, and the ESXi versions are 5.0 and 5.5. I am trying to migrate VMs from one cluster to another cluster on which EVC has been enabled. Some of the VMs from my HP Gen8 ESXi host migrated successfully to the EVC-enabled cluster, which has Gen6, Gen7, and Gen10 hosts, but some VMs from the same host failed to migrate with the error below.

     

    vmotion error1.JPG

     

    Please help me understand why some VMs from the same host migrated while others show the vMotion failed error, and kindly provide a solution for resolving this issue.

     

    Regards,

    Rajesh

    rajjesh.poojary@gmail.com


    0 0

    Dear Team,

     

    We have a requirement to send logs kept at a custom path on the VMware View server to VMware Log Insight.

     

    We have installed the Log Insight agent on the server, and it is forwarding some logs, but not all.

     

    So we need the steps to send logs from a custom path to vRealize Log Insight.
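    For reference, the Log Insight agent only collects file logs from the directories it is configured for; a custom path is usually added as a [filelog] section in the agent's liagent.ini. The section name and paths below are illustrative examples only - point "directory" at the actual custom log path and restart the agent service after editing:

```ini
; Example [filelog|...] section in liagent.ini (name and paths are placeholders)
[filelog|view-custom]
directory=C:\ProgramData\VMware\CustomLogs
include=*.log;*.txt
```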

     

    Thanks in Advance.

     

    Regards,

    Rajesh


    0 0

    Hi

     

    Really weird problem.

    When changing the location of the Windows page file and rebooting, Windows gives the error: "Windows created a temporary paging file on your computer because of a problem that occurred".

     

    I have tried pretty much every combination of disk controller/MBR/GPT/allocation unit size/page file size there is.

     

    Anyone tried or able to reproduce the issue?

     

    /Kristian


    0 0
  • 12/23/18--11:36: Unable to start VMs
  • I have Workstation Pro 15 installed on my Ubuntu 18 machine.  When I try to start an imported VM, it gives me the following error:

     

    "Could not open /dev/vmmon: No such file or directory.

    Please make sure that the kernel module `vmmon' is loaded."
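    This usually means the vmmon kernel module isn't built or loaded for the running kernel. A common remedy on a Linux host (both are standard VMware/Linux commands, though whether they resolve it depends on the setup - for example, Secure Boot can block the unsigned modules) is:

```shell
# Rebuild the VMware kernel modules (vmmon, vmnet) for the running kernel,
# then load vmmon.
sudo vmware-modconfig --console --install-all
sudo modprobe vmmon
```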


    0 0

    I'm trying to upgrade my VCSA from 6.5 to 6.7, and it fails the Stage 2 pre-upgrade checks with a pretty unhelpful:

     

    "Cannot validate target appliance configuration as not enough information from the source appliance can be collected. For more details check out the server logs"

     

    I've attached the upgrade logs; can anyone assist with this? Thanks


    0 0

    I have a 2018 i9 MacBook Pro with the Vega 20 Chipset. Running VMWare Fusion Pro 11.

     

    Host

    32GB of RAM

    i9

    Vega 20 w/4GB

    Running Mojave 10.14.2

     

    When using Windows 10 as a guest, configured with:

    8GB RAM

    4 cores

    In full screen, set to use all screens.

     

    If I set Accelerated Graphics to "Always use..." Windows runs very quickly and is quite usable... for about 15 minutes. After that, the machine locks up, and the only way I can recover is with a hard reboot. This occurs whether I assign it 1GB or 3GB of video memory.

     

    I have attempted to SSH into my Mac and kill VMware, but nothing will give my screens back and switch back to the desktop except a hard reboot.

     

    I'm not sure if this is a Windows Guest Problem, VMware Fusion Problem, or a Mojave problem.

     

    Anyone seeing the same thing?


    0 0

    Hello,

     

    I am going to have to change my LAN addressing.

     

    No problem for my workstations or my routers, but how do I go about it for my VMware infrastructure?

     

    I have:

    2 ESXi hosts

    1 vCenter on W2008

    1 vSphere on Linux

     

     

    How should I proceed to change these 4 IP addresses?

     

    Thank you for your help.


    0 0

    So I've come across a weird issue in my new test system (just a test setup, ESXi 6.5U2, host only, no vCenter):

     

    I have a 2.5" P3700 NVMe disk attached to my Supermicro X10-based system (it's direct-attached via an SM riser card, AOC-2UR8N4-i2XT, which has 4x NVMe ports, which go to an SM NVMe backplane; I don't think this hardware is relevant, though).

     

    I had been running the system great for about two weeks; I had a datastore created directly on the NVMe disk with several VMs running from it.

     

    Yesterday my power flickered, and when it came back my NVMe datastore was gone, and under Storage -> Adapters the NVMe "HBA" wouldn't show up. I tried removing and reinserting the NVMe disk, trying a different NVMe bay, and shutting down/rebooting the host a few times. Nothing.

     

    However, this entire time I could see the disk via the GUI (Manage -> Hardware -> PCI Devices; see image) and also via SSH with lspci:

    0000:08:00.0 Mass storage controller: Intel Corporation DC P3700 SSD [2.5" SFF]

     

    Assuming the flash was fried or something, I booted the system into an Ubuntu live CD, and under the Disks utility there was the P3700 disk, with its proper 1.6TB VMFS partition intact.

     

    I then updated/patched from 6.5U2 (May 2018) to 6.5U2 (latest patches, ~Nov 1 2018). Rebooted; still no NVMe showed up.


    I happened to attach a random SATA disk (to a motherboard SATA port), formatted it as a datastore (VMFS), then rebooted, and BOOM, the NVMe datastore was back! If I remove the SATA disk and reboot (so that the NVMe is the only disk visible to ESXi), the NVMe again won't appear!

     

    So it seems that as long as I have some other disk attached, my NVMe appears properly.

     

    Any ideas what this is about?

     

    thanks!


    0 0

    I can't get VMware Player to install Windows 10 x64 on an Ubuntu 18.04 host. Player tells me that it can't find the operating system. I have the ISO file, and I've checked "Use ISO image file" under CD/DVD. As a sanity check, I tried installing Windows 10 using VirtualBox, and it does find the ISO image.


    0 0
  • 12/23/18--15:10: Export Results to csv
  • Hi All,

    I'm new to vRO, and I'd like to modify the following workflow to export the results to a CSV file and send it to a shared location.

    Thanks a lot!
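    Since the workflow itself isn't shown, here is only a generic sketch of writing CSV from vRO using its FileWriter scripting class. It assumes the workflow produces an array of objects in a variable named results with name/status properties (all hypothetical), and the output path is an example that must be permitted by the appliance's js-io-rights.conf. vRO JavaScript, not runnable outside Orchestrator:

```javascript
// Sketch: build CSV lines from an assumed `results` array, then write them
// out with vRO's FileWriter. Names, properties, and the path are examples.
var csvLines = ["name,status"];                      // header row
for each (var r in results) {
    csvLines.push(r.name + "," + r.status);
}
var fileWriter = new FileWriter("/tmp/results.csv"); // path must be allowed in js-io-rights.conf
fileWriter.open();
for each (var line in csvLines) {
    fileWriter.writeLine(line);
}
fileWriter.close();
```

Copying the file to a shared location would be a separate step (e.g. an SCP or CIFS action), which depends on what's available in the environment.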


    0 0

    I am confused and wondering whether a Fusion 11 Pro private-network vmnet is blocking the Windows DHCP server's DORA exchange.

    I tried ARP, ipconfig /release and /renew... nothing works.

     

    Here is my scenario:

     

    I am using Fusion 11 pro.

     

    VMware Fusion custom network vmnet03

    I unchecked "Provide addresses on this network via DHCP".

     

    1. Win Server 2016 (AD with DHCP and DNS roles), connected to the vmnet03 network

    2. Win 10 Pro, connected to the vmnet03 network

     

    SERVER configuration

    • static IP 192.168.1.2/24
    • default gateway 192.168.1.2
    • DNS 127.0.0.1 (localhost)

    DHCP scope: 192.168.1.10 to 254

     

    Client win 10 pro configuration

    • obtain an IP address automatically
    • obtain a DNS address automatically
