Channel: VMware Communities : Discussion List - All Communities

All the menu items are greyed out, why?


A few days ago, I set VMware not to start when the OS starts. Now, when I start VMware, all the menu items are greyed out.

I checked the services; they all started normally. What should I do to make the menu items usable again?



How can I stop an action from executing when I start a workflow?


Hi all,

 

I'm working on a workflow to connect or disconnect an ISO file to/from a VM's CD-ROM drive.

If the CD-ROM is connected to an ISO, I hide the disconnect section and show only the connect section.

If the CD-ROM is disconnected from the ISO, I hide the connect section and show only the disconnect section.

My problem now is this: when I execute the workflow, I don't want it to run the action bound to a string variable in the presentation, the one that lists all the directories and ISO files in a datastore.

Once I run the workflow, it takes a few seconds because of the execution of that action.

I want this action to execute only when I choose to connect the CD-ROM to an ISO file.

Any ideas?

Thanks

Screenshot - 2019-01-05 , 20_53_49.png

Video memory in Windows XP


I can't increase the video memory in Windows XP.

VMware Player

P2V of a WSFC environment


We have a print server's resources running in a WSFC cluster on Windows 2008, in a 2-to-1 configuration.

Because the hardware is going out of maintenance, we want to extend its life by using Converter to migrate it to a virtual environment via P2V.

Recreating the cluster resources would be a significant burden, so if possible we would like to do the P2V migration without rebuilding the WSFC environment. If anyone has any information on this, please share it.

Failed to Set Up Trunk Between VMware ESXi 6.5 and HP Switch 5130 EI Series


Hi All,

 

I'm having difficulty setting up a trunk between VMware ESXi and an HP 5130 EI series switch. In my mind, doing this between ESXi and a Cisco switch is a simple thing: just set the switch port to trunk mode, allow some VLANs, and voilà, it works immediately.

 

In this case, I have an ESXi host connected to 2 HP switches (already stacked) with 8 cables (4 per switch). On the vSwitch (a standard vSwitch), under NIC teaming, we set "Route based on IP Hash", like below:

 

 

On the HP switch, we set up a BAGG (roughly equivalent to an EtherChannel or LACP group on Cisco).

Here is a screenshot of the BAGG group: Aggregation Mode Static

On the bridge aggregation interface we set up the trunk, set PVID 182 (this is the VLAN of the ESXi host), and then allow all VLANs to pass.

 

 

We also set VLAN 182 in the VMware ESXi DCUI.

But in the end, we still cannot ping the ESXi host.

 

Many questions here:

 

- Please correct us if you see something wrong with our configuration.

- Does the VMware standard vSwitch support the BAGG (LACP) mode on the HP switch?

- Does LACP work across stacked physical switches?

 

Please help us figure out the problem; if you need any additional info, we are happy to share it.

 

Any information will be very much appreciated.
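In case it helps to rule out the ESXi side, below is a minimal pyVmomi sketch (purely illustrative; the host name, user, password and inventory path are placeholders, and it assumes pyVmomi is installed) that prints the NIC teaming policy of each standard vSwitch. "loadbalance_ip" is what the UI calls "Route based on IP hash", the policy normally paired with a static aggregation group rather than LACP.

import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details - replace with your own host and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-host.example.local", user="root", pwd="secret", sslContext=ctx)

# When connected directly to an ESXi host there is a single datacenter and compute resource.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

for vsw in host.config.network.vswitch:
    teaming = vsw.spec.policy.nicTeaming
    # "loadbalance_ip" corresponds to "Route based on IP hash" in the UI.
    print(vsw.name, teaming.policy, "active uplinks:", teaming.nicOrder.activeNic)

Disconnect(si)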

 

Best Regards,

Azlan Syah

NSX-v 6.4 BFD


Does NSX-v 6.4 Edge support BFD?

I would like to detect link failure between edge and physical router.

Thanks.

True SSO for CentOS 7 - Instant Clone agent initialization state error (1): failed (waited 10 seconds)


Hi!

I have

VC 6.7

Horizon 7.7

CentOS 7 (1611)

 

The master image is joined to the MS AD domain via sssd; everything is fine.

 

Then I followed this article to configure True SSO:

Configure True SSO on RHEL/CentOS 7.x Desktops

 

Then I created a floating automated pool. When the pool was created, I saw the customizing status and then this error:

Instant Clone agent initialization state error (1): failed (waited 10 seconds)

 

In AD, I can see that the computer objects are created.

 

I can't read the log /var/log/vmware/viewagent-ngvc.log, because the VM reboots very quickly.

Any ideas?

Is this configuration supported?


Linux permission issue with VMware shared folder


So I am using a Kali image in VMware Workstation, with a shared folder from Windows 10. The issue is that all the files/folders in the shared folder have the executable permission set. I chmod'ed all folders to 0755 and all files to 0644, but they still end up with the same permissions.

 

 

 

 

2019-01-06_0200.png

All VMs Start with Black Screen


I recently moved all my VMs from a Windows 7 64-bit host to Windows 8 64-bit.

Fresh install of Player v5.0.2 build-1031769.

The computer is an Acer Aspire M with an Intel i5 @ 1.8 GHz.

Graphics is Intel HD 4000.

 

When any of the various VMs are started from the Powered Off state, the player window will open and a big black screen will appear where the guest display should be shown. The guest OS does indeed boot, right up to the login screen although with no display visible. If I try to click on the close button in the upper right corner, I get a dialog box that says "The Virtual Machine is Busy". My only option is to end the vmplayer.exe task using task manager. Once I do that, the VM shows state as Powered On in the Library. If I Play it the 2nd time, the VM will resume operating correctly with the proper display at the immediate point where I expected it to be when I ended the vmplayer.exe task earlier.

Presence or absence of Tools in Guest does not affect the problem.

 

All 3 of my VMs have the exact same behavior. The guest operating systems are: WinXP SP3 32-bit, Win7 64-bit built with VMware Converter, and Server 2008 from the MS eval ISO.

 

Any ideas? Once the play / end task / play sequence is performed, the VM works correctly until it is shut down or hibernated. A VM restart does not exhibit this symptom. The problem is reproducible with all VMs every time.

 

I've provided the vmx for the XP VM and a few logs.

Thanks in advance. I'm so close to having this work that it's even more frustrating than if it didn't work at all.

Failed to start HACore profile on node


Hello,

 

Failed to deploy a vCenter HA cluster (vCenter 6.5.0 Build 5318154) with the following error:

 

"A general system error occured: Failed to start HACore profile on node 192.168.18.22" (192.168.18.22 is the IP address for the witness server filled up during the configuration wizard)

 

We tried the below with no luck:

  1. Power off and delete the Passive and Witness nodes.
  2. Log in to the Active node by using SSH or via Direct Console.
  3. Log in as the root user and enable the Bash shell: # shell
  4. Run the following command to remove the vCenter HA configuration: # destroy-vcha -f
  5. Reboot the Active node: # reboot
  6. Wait until the Active node is back online and start vCenter HA cluster configuration again

 

The following is the status of the services on our vCenter:

Also, the vCenter appliance can ping the above IP address:

 

 

Please advise,

My ESXi hosts (6.5) keep ballooning their RAM even with half my guests powered off


Hi,

 

I have 4 ESXi hosts that suddenly started to balloon their host RAM. The main hosts have 512 GB RAM each.

One host has 40 logical processors running 23 guests. If I power down some of the guests, leaving me with around 13 active guests, the server still balloons the RAM.

 

If I look at the RAM usage per guest, it is around 200 GB in total; that is with 23 guests running.

The system swap file has its own datastore location. When the memory usage of a host increases, it causes several of my guests to stop responding and I need to do a hard reboot.

 

What also bothers me is that if I go to view the swap file settings, the location shows in red and I have to select another location first. Is this normal?

 

I am running VMware ESXi 6.5.0 build 10884925, and the guest OS on the servers is mainly Windows 2012.
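To see exactly which guests are being ballooned and by how much, a small pyVmomi script like the sketch below might help (the vCenter address and credentials are placeholders; it assumes pyVmomi is installed). It simply prints the ballooned and active guest memory from each VM's quick stats.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

content = si.content
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    qs = vm.summary.quickStats
    if qs.balloonedMemory:  # MB currently reclaimed by the balloon driver
        print(vm.name, qs.balloonedMemory, "MB ballooned,", qs.guestMemoryUsage, "MB active")

Disconnect(si)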

 

Your help would be appreciated

 

Regards

Deon Loretan

deonl@mintek.co.za

Failed to get disks while trying to restore single files on Linux


Hi,

 

I'm trying to restore single files from a VDP backup of Linux machines (vSphere Data Protection Linux File Level Restore - VMware vSphere Blog).

 

Everything is at the most recent release. When I try to mount the backup in the flash client (https://VDP_APPLIANCE:8543/flr) I get this error:

Failed to get disks: Failed to determine the partition type. Verify that all the disks on the VM have valid/supported partitions 

 

I tried with nodes with various disk structures, but the error is always the same:

 

NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
├─sda1 ext4         749c6053-e217-4446-a753-bcfdc426df8c /
├─sda2 swap         b82985f9-e84f-41f3-8a29-9424bc14d513 [SWAP]
└─sda3 ext4         c0d3298e-e1bc-4e67-beef-c18176965959 /tmp
sr0

 

NAME             FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
└─sda1           xfs               8b6d50d8-cfb8-4020-bfe8-3be977eba7c9   /boot
sdb
└─sdb1           LVM2_member       DohVGH-Nz0v-6LQq-Kjn7-VDRJ-yGfh-9jv7LP
  ├─vglocal-root xfs               c5999e7e-4711-4fbb-a166-0fe81594233f   /
  └─vglocal-swap swap              10ad0584-2a34-4cce-9007-11dece9c1c84   [SWAP]
sr0

 

NAME            FSTYPE      LABEL UUID                                   MOUNTPOINT
fd0
sda
├─sda1          xfs               5e350928-01d7-43b9-b325-ecad28742610   /boot
└─sda2          LVM2_member       3jZTd6-ic0Z-u8vZ-s7Qq-ZRa5-kvBg-Kimgfq
  ├─centos-root xfs               6c10a246-9514-4491-a7fd-a74861638507   /
  └─centos-swap swap              fb55c267-2440-4b4d-9671-de8d295cf121   [SWAP]
sr0

 

The VDP manual says that at least ext4 (my first try) should be supported. By the way, is it possible that XFS (the default Red Hat filesystem) is not supported at all?

 

Thank you very much!

Horizon won't launch with Fedora 29


How does one get VMware Horizon to launch on Fedora 29?

 

I installed it fine - I accepted the EULA, said "no" to all special options, and said "yes" to a system scan for compatibility (which passed). But when I click the icon to launch the program, nothing happens.

ESXi 6.5 + LAN Realtek


Please help, I've been struggling with this problem for three days.

I have a PC with an onboard Realtek 8168 LAN adapter and an add-in Realtek 8139 LAN card.

The CPU is not supported in versions after 6.5.

I am installing on bare metal from scratch.

 

In version 6.5 these NICs are not supported, so I inject the drivers into the installation ISO with a PowerShell script.

I tried both downloading the ISO with the drivers injected in one step, and building an ISO from the offline ZIP bundle plus the driver.

 

In both cases the resulting ISOs boot. The compatibility-check stage passes, but at the end an error appears saying that these drivers require:

vmkapi_2_2_0_0, com.vmware.driverAPI-9.2.2.0 and vmkapi_2_0_0_0, com.vmware.driverAPI-9.2.0.0. After that the only option is to reboot, and nothing works.

 

I downloaded the drivers from here: https://vibsdepot.v-front.de/wiki/index.php/List_of_currently_available_ESXi_packages

I tried 6.0.0, 6.5.0 and 6.5.2 - the same result everywhere.

 

Please tell me what to do.

 

 

 

 


Problems while installing VMware Tools on Linux: "Mounting HGFS shares" and "VGAuthService" failed


Hello,

I tried to update an existing SUSE Linux 9 VM in VMware Fusion 11 on High Sierra. The virtual machine was running fine with VMware Fusion 6 on macOS 10.10.

I logged in as root and ran ./vmware-install. This worked, but at some point I got the messages shown in the screenshot.

I am not a total Linux expert, but I need this old Linux mainly because of a perfectly working LyX.

Thanks for all your support!

EDIT: Would it make a difference to install SUSE Linux from scratch on Fusion 11? (I would have to get an external DVD drive first, which is why I am asking instead of just trying it...)

Bildschirmfoto 2019-01-01 um 14.48.06.png

NVMe disk/vmdk: Slower than iSCSI! Where is the claimed performance gain?


Hi guys,

 

I'm currently in my trial period of v14 before I pay for the upgrade from v12.5.

The most important feature for me is the performance gain of such a disk, as described in the VMware Workstation 14 Pro Release Notes.

My first tests, though, are quite disappointing!

 

Preface

My host is Windows 7 x64 on a Dell Precision 7510 with an 8-core Xeon, 64 GB RAM, and two NVMe M.2 drives (Samsung 960 Pro & SM951).

I've installed the latest Samsung NVMe driver on the host, and everything else is up to date as well.

All guests are up to date and all NVMe disks show up correctly in Device Manager (under the VMware NVMe Controller).

All my test runs are done with a completely idle host and guest, and are repeated a couple of times to ensure consistency.

 

My first test was done with the VM on the "old" SM951 NVMe.

The VM/guest is a newly installed Windows Server 2016 with a full VMware Tools installation.

I created 4 disks/vmdks of 60 GB each:

171210 on old (comparison).png

The performance inside the guest isn't the same as on the host - not even close!

But what's more troubling is that the SCSI vmdk outperforms the new NVMe!!

 

I did another test with the VM/guest on the brand new 960 Pro.

The VM is a Windows 7 with just one vmdk, attached as NVMe:

171210 on 960pro (comparison).png

Here the "loss" is even worse!

 

It even seems there there is a "cap" in the maximum performance on a NVMe-vmdk, doesn't matter how fast the physical NVMe is.

 

Does anyone have suggestions as to the reasons for these results?

Can/should I approach VMware directly with this issue?

 

Thanks heaps!

Soko

Why does vSAN free capacity differ so much from VSA?


Hi VMware Community,

 

I would like to know why the vSAN and VSA free capacities differ so much.

Below is a picture as an example of my claim. From what I can see, VSA storage shows approx. 5.6 TB of free space, whereas the vSAN capacity view shows approx. 17.6 TB free.

I wonder if there is a problem with the configuration?

Appreciate the clarification.

10gbE bottlenecked at 1gbE regardless


I have 10 GbE throughout my network, but I'm bottlenecked at 1 GbE.

 

Here's my scenario:

ESXi 6 host 10gb fiber to procurve 6600 switch

vCenter Server 6.5 on a VM

Storage on iSCSI via Open Media Vault with 10gbE fiber to Procurve 6600 switch (LSI 9750 RAID6 11x 4TB enterprise 7200RPM disks)

Backup appliance (may or may not be pertinent here but it's where I'm ultimately attempting to get the 10gb throughput so I'll mention it)

 

Iperf 2 shows 7 Gb+ from an Ubuntu VM on the host through to the backup appliance.

Iperf shows 7 Gb+ from VM to VM.

Iperf shows essentially all links are 10 Gb capable.

Jumbo frames are enabled throughout: VM, switch, vSwitch, VMkernel, storage, appliance, etc.

 

However, when I transfer a file, vMotion, SCP, or back up via NBD transport, EVERYTHING is maxed out at 1 Gb. There are NO 1 Gb links in the mix. iSCSI is bound to the 10 Gb link.

 

Is there a license requirement for 10 Gb within VMware? Is there a setting in the web client I haven't found? (I'm new to the vSphere web client and can't stand the networking section.) Something has to be in play here that I haven't been privy to.
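To rule out a physical NIC that has silently negotiated down, here is a minimal pyVmomi sketch (host name and credentials are placeholders; it assumes pyVmomi is installed) that prints the negotiated speed of every physical NIC on the host:

import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder connection details - replace with your own host and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-host.example.local", user="root", pwd="secret", sslContext=ctx)

host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
for pnic in host.config.network.pnic:
    if pnic.linkSpeed:  # None means the link is down
        print(pnic.device, pnic.linkSpeed.speedMb, "Mb/s, duplex:", pnic.linkSpeed.duplex)
    else:
        print(pnic.device, "link down")

Disconnect(si)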

 

 

Thank you

6.7U1 vs 6.5U2 passthrough regression


Hello.

 

I'm hoping someone has some clue why this is happening.

 

Using ESXi 6.5U2, GPU passthrough (NVIDIA 2080 Ti) works perfectly. On 6.7U1 it works until I reboot the Windows VM. After rebooting I get error code 43, and the only working fix is to reboot the host. GPU passthrough only works on the first VM boot; if I reboot the VM, it stops working with error code 43.

 

Downgrading to ESXi 6.5U2 fixes the issue.

Any suggestions?
